Monday, February 23, 2026

Student ICE Protests Lead To Lockdowns, Debate Over Discipline In Pennsylvania Schools

  by Janice Hisle via The Epoch Times (emphasis ours),

School officials ordered two eastern Pennsylvania schools into lockdown on Feb. 20 after dozens of students left the buildings and became unruly. The lockdown came after officials had directed the students to cancel their planned protest against Immigration and Customs Enforcement operations.

High school students gather for an anti-Immigration and Customs Enforcement protest outside the Minnesota Capitol in St. Paul on Jan. 14, 2026. Octavio Jones/AFP via Getty Images

Quakertown High School and Quakertown Elementary School, about 50 miles north of Philadelphia, were locked down for nearly two hours.

School officials took the action after police notified them that high schoolers, who had left the building without permission, “were engaging in unsafe and disruptive behavior in town,” acting Superintendent Lisa Hoffman wrote on the Quakertown Community School District website.

Her statement provides no further details about the students’ behavior, but CBS News reported that five students were arrested.

Video footage posted on X shows Quakertown police struggling to put a person into the back of a police SUV as a crowd mills around and some people shout. When an ambulance arrives, a man in plain clothes exits an unmarked vehicle, dabbing what appears to be a bloody nose while officers ask whether he is OK.

School officials said they were waiting for more information from the police regarding reports of students’ actions. A Quakertown police sergeant told The Epoch Times that he was not permitted to release a statement from the borough’s police administration.

Earlier in the day, Quakertown school officials had notified families and students that a planned “student-led walkout should no longer occur,” Hoffman wrote. District leaders made that decision after consulting with law enforcement over “a potential safety concern” in connection with the walkout.

However, in defiance of that directive, about 35 Quakertown High School students left the building at about 11:30 a.m. Immediately, administrators worked with police and locked down the high school and the elementary school, stopping anyone from entering or leaving the buildings, Hoffman said.

“Students in both schools maintained their normal school day activities,” Hoffman wrote, and the lockdown was lifted at about 1:15 p.m.

Meanwhile, in Spring Township, near Reading, Pennsylvania, the Wilson School District issued a statement addressing a widely circulated video showing Daniel Weber, principal of Wilson High School, telling student protesters that they would be suspended if they did not return to class.

In response to “numerous” phone calls and emails about the video, Superintendent Chris Trickett posted a statement on Feb. 19, a day after Weber addressed the group amid an unauthorized walkout.

Trickett said the video “captures only a portion of the interaction between school staff and students.”

Further, he wrote, “The situation was particularly challenging because we had been informed that the demonstration would not take place.”

A careful review of the circumstances revealed that no one was disciplined for expressing political views, the superintendent said. Rather, disciplinary action was based on violations of the student handbook, including “leaving class or the building without permission,” he said.

“Longstanding legal guidance, including the U.S. Supreme Court’s decision in Tinker v. Des Moines, affirms that students do not ‘shed their constitutional rights to freedom of speech or expression at the schoolhouse gate,’” Trickett wrote, referring to that 1969 landmark ruling.

However, Trickett wrote, “the Court made clear that schools may take action when conduct materially disrupts the educational environment or compromises student safety.” Further, schools can and must regulate demonstrations “in alignment with school rules and policies,” he said.

“Our response reflects this balance, between protecting student expression and fulfilling our responsibility to maintain safe and effective school operations,” Trickett said.

https://www.zerohedge.com/political/student-ice-protests-lead-lockdowns-debate-over-discipline-pennsylvania-schools

DNC Covered Up Its 2024 Election Autopsy, And Now We Know Why

 After the 2024 presidential election, the Democratic National Committee conducted an autopsy of the party’s defeat and intended to release it.

It pledged an honest accounting of how Donald Trump reclaimed the White House. It assured its own officials, strategists, and donor class that a thorough post-mortem was coming.

However, after the autopsy was complete, the DNC clammed up and kept it under wraps.

There was something in the report they didn’t want the public to see, and Democrats weren’t happy about it.

The official explanation for suppressing the report is that releasing it would distract from the party's focus on winning back Congress in 2026 by relitigating the past.

That explanation doesn’t hold up.

Several Democrats, including advisers to potential 2028 presidential hopefuls, have argued that burying the report conveniently shields Harris from accountability should she run again, while also protecting the consultant class whose strategic decisions contributed to the loss.

"I suspect the reasons why this isn't being released are precisely the reasons why it should be released,” Lis Smith, a longtime adviser to Pete Buttigieg, said in a post on X last year.

“The DNC's actual position is that if the public knew more about what Democrats got wrong in the last election, it would hurt the party's chances in the next election,” former Obama speechwriter Jon Favreau wrote.

Favreau was more right than he realized. Because we know now what the DNC didn’t want the public to know.

According to a report from Axios, DNC staff members working on the report held a private meeting with the IMEU Policy Project, a pro-Palestinian advocacy organization, specifically to discuss the electoral impact of U.S. policy toward Israel.

Hamid Bendaas, a representative for the group, said the DNC acknowledged in that meeting that "their own data also indicated that this policy was, in their assessment, a 'negative' for the 2024 election." 

Two additional senior IMEU Policy Project members independently confirmed that the DNC reached the same conclusion.

Axios separately verified that Democratic officials involved in the analysis found the Gaza issue hurt the party's appeal with certain voter blocs.

Harris spent much of 2024 trying to navigate Israel-Gaza without alienating either side. She expressed firm support for Israel while also calling for a ceasefire and voicing empathy for Palestinian civilians.

It was a strategy that failed to satisfy the pro-Palestinian wing of the party, largely made up of younger voters and older progressives who had already grown skeptical of the administration's backing of Israel and who proved particularly difficult to retain.

The autopsy appears to suggest that the party’s ability to succeed in the future requires it to be unequivocally anti-Israel.

DNC spokesperson Kendall Witmer denied the claim that findings related to Israel are driving the suppression of the report; however, even Kamala Harris seems to have confirmed the autopsy report’s findings.

During an event for her 107 Days book tour, Harris said the administration “should have done more” and “should have spoken publicly” about its criticism of Netanyahu’s handling of the war.

In the memoir, she wrote that Biden’s “perceived blank check” to Israel hurt her 2024 campaign and revealed she had privately urged him to show greater empathy for Gazan civilians even as she refused to break with him publicly. 

Democrats are now staring at an uncomfortable reality: their internal diagnosis is pushing them further down an explicitly anti-Israel path, and now everyone knows it.

https://www.zerohedge.com/political/dnc-covered-its-2024-election-autopsy-and-now-we-know-why

Kaiser sues liability insurers over $556M Medicare Advantage settlement

 Oakland, Calif.-based Kaiser Permanente is suing nine liability insurers for breach of contract, alleging they have refused to cover any portion of its $556 million settlement with the federal government resolving False Claims Act allegations tied to Medicare Advantage billing practices.

Kaiser Foundation Health Plan and its Colorado affiliate filed the complaint Feb. 20 in the U.S. District Court for the Northern District of California, seeking up to $95 million in coverage across a layered program of directors and officers liability policies. The defendants include AIG, Chubb, Berkley, Starr, National Fire, RSUI, Markel, Fair American and Allianz.

The dispute stems from Kaiser’s settlement in January of a consolidated whistleblower action, which encompassed six qui tam lawsuits and a complaint-in-intervention by the Department of Justice. The underlying case alleged Kaiser violated the False Claims Act by submitting improper risk-adjusting diagnosis codes to CMS to inflate Medicare Advantage payments.

According to the complaint, AIG issued a primary D&O policy providing $10 million in coverage above a $10 million self-insured retention, which Kaiser has satisfied. The eight excess insurers issued policies providing an additional $85 million in coverage. AIG acknowledged the lawsuit was a covered claim but paid only $1 million toward Kaiser’s legal defense, citing a policy provision that limits coverage for claims involving the return of government funds. AIG denied coverage for the settlement itself, and all eight excess insurers followed suit.

Kaiser argues the exclusion does not apply to the full settlement. Because the government sought treble damages under the False Claims Act, Kaiser contends a large portion of the settlement represents multiplied damages that go beyond any funds it received from CMS and should therefore be covered under the policies.
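The shape of that argument can be made concrete with some back-of-the-envelope arithmetic. The actual allocation within the settlement is not public, so the numbers below are purely illustrative: if the $556 million figure were priced at the False Claims Act's full treble-damages multiplier, only about a third of it would correspond to funds actually returned to the government, with the remainder being the multiplied portion Kaiser says its policies should cover.

```python
# Illustrative arithmetic only -- the actual settlement allocation is
# not public. Under the False Claims Act, damages can be trebled, so a
# settlement priced at the full 3x multiplier would contain roughly one
# part returned government funds to two parts multiplied damages.
settlement = 556_000_000            # Kaiser's reported settlement amount
multiplier = 3                      # FCA treble-damages ceiling (hypothetical split)

single_damages = settlement / multiplier        # the "returned funds" slice
multiplied_portion = settlement - single_damages  # the slice beyond returned funds

print(f"returned funds (hypothetical): ${single_damages:,.0f}")
print(f"multiplied damages (hypothetical): ${multiplied_portion:,.0f}")
```

Under that hypothetical split, the multiplied portion would dwarf the $95 million in coverage Kaiser is seeking, which is the crux of its claim that the "return of government funds" exclusion cannot reach the whole settlement.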

Between 2009 and 2018, Kaiser allegedly engaged in a scheme to inflate risk adjustment payments in California and Colorado, the Justice Department alleged. Prosecutors said Kaiser used internal data-mining tools to identify diagnoses from patients’ past medical histories that had not been submitted to CMS and then sent “queries” to physicians urging them to add those diagnoses through addenda, sometimes months or more than a year after the original visit.

In many cases, the added diagnoses had no connection to the patient visit, the Justice Department alleged.

The government also alleged Kaiser set aggressive diagnosis submission targets for physicians and facilities, flagged underperforming providers, and tied financial incentives and bonuses to meeting risk adjustment goals. Internal compliance audits and physician complaints raised concerns about the legality of the practices, but Kaiser allegedly continued them.

“We chose to settle to avoid the delay, uncertainty and cost of prolonged litigation,” the health system said in a previous statement. “Multiple major health plans have faced similar government scrutiny over Medicare Advantage risk adjustment standards and practices, reflecting industrywide challenges in applying these requirements. The Kaiser Permanente case was not about the quality of care our members received. It involved a dispute about how to interpret the Medicare risk adjustment program’s documentation requirements.”

Becker’s has reached out to Kaiser and the liability insurers for comment and will update this article if more information becomes available.

https://www.beckershospitalreview.com/finance/kaiser-sues-liability-insurers-over-556m-medicare-advantage-settlement/

AHA urges HHS to align AI rules with existing healthcare regulations

 The American Hospital Association is calling on federal health officials to reduce regulatory barriers and ensure clinician oversight as artificial intelligence becomes more integrated into clinical care.

In a Feb. 23 letter to the Department of Health and Human Services, the AHA outlined recommendations in response to the agency’s request for information on accelerating AI adoption in healthcare.

Here are five things to know:

  1. The association urged HHS to align new AI policies with existing regulatory frameworks rather than creating standalone rules, arguing duplicative or overly restrictive regulations could hamper innovation. It specifically called on the agency to withdraw the proposed 2024 HIPAA Security Rule update, saying certain provisions — including a 72-hour system restoration requirement after cyberattacks — would be infeasible and could increase risk.

  2. The AHA also pressed for stronger federal HIPAA preemption to address what it described as a patchwork of state privacy laws that increase compliance costs and impede data sharing critical to AI development. The group also urged removal of remaining 42 CFR Part 2 requirements requiring separate handling of substance use disorder records.

  3. A key focus of the letter was insurer use of AI in prior authorization and coverage determinations. The AHA said clinicians — not AI tools alone — should be involved in decisions resulting in partial or full denials of care and called for greater transparency around how algorithms are used.

  4. On reimbursement, the association said Medicare payment structures do not fully account for the costs of developing, deploying and maintaining AI tools and warned payment updates for AI services should not come at the expense of other medical services. It cited costs including clinical validation time, maintenance, cybersecurity insurance and software and data storage.

  5. The AHA also recommended third-party vendors, including AI developers handling protected health information, be held to the same privacy and security standards as HIPAA-covered entities and called for risk-based post-deployment monitoring standards for AI-enabled medical devices.

The letter emphasized that while AI shows promise in areas such as imaging, clinical documentation and scheduling, guardrails are needed to ensure patient safety and appropriate oversight as adoption expands.

https://www.beckershospitalreview.com/healthcare-information-technology/ai/aha-urges-hhs-to-align-ai-rules-with-existing-healthcare-regulations/

HHS launches $500K challenge to turn EHR data into clinical insights

 HHS’ Assistant Secretary for Technology Policy is offering $500,000 for health IT developers to turn raw EHR data into actionable insights for clinicians and patients.

The EHIgnite Challenge, which launched Feb. 23, will grant $10,000 to each of nine winners in the first phase, followed by finalist prizes of $250,000, $100,000 and $50,000, with bonus recognition for multi-EHR interoperability, according to a news release shared with Becker’s.

“While health IT developers have been required to export EHI [electronic health information] since December 2023, ‘computable’ doesn’t always mean ‘usable,’” the release stated. “Raw exports are often overwhelming and difficult to integrate. That’s a problem the EHIgnite Challenge is hoping to solve. The challenge seeks solutions that improve the usability of single-patient EHI exports.”

HHS is looking for ideas like interactive patient tools, filtering by clinical domains, and streamlined payer workflows. The agency is hosting a webinar about the challenge March 11.

https://www.beckershospitalreview.com/healthcare-information-technology/ehrs/hhs-launches-500k-challenge-to-turn-ehr-data-into-clinical-insights/

Musk's xAI, Pentagon said to have inked deal on Grok use

 Elon Musk's xAI and the Pentagon have signed a deal allowing the military to use the company's artificial intelligence (AI) model Grok in classified systems, Axios reported, citing a defense official.

https://breakingthenews.net/Article/Musk's-xAI-Pentagon-said-to-have-inked-deal-on-Grok-use/65728812

AI Beats Human Research Teams At Crunching Medical Data

 Whether you think AI is on the cusp of replacing millions of jobs, or an overblown Google search designed to agree with you, one thing is sure: people whose job it is to analyze complex medical data might want to pay attention...

For years, biomedical research has had a problem: too much data, not enough people who know how to wrangle it - or simply that it took months to do so. Modern health studies generate oceans of molecular information - gene expression, DNA methylation, microbiome profiles. Turning that into useful predictions about disease risk or pregnancy outcomes typically requires teams of data scientists, months of coding, and endless debugging.

Now, according to a new study in Cell Reports Medicine, some AI systems can do much of that work in minutes - and in at least one case, they did it better than humans.

The Test: AI vs. the Crowd

Researchers at UC San Francisco and Wayne State University took eight large language models - the same class of AI that powers systems like ChatGPT - and dropped them into a serious biomedical competition. The team used data from three previous international DREAM Challenges, where more than 100 research teams had built predictive models tackling reproductive health questions such as:

  • Can you predict gestational age from blood gene expression?

  • Can you estimate the biological age of the placenta from DNA methylation?

  • Can you detect risk of preterm birth from vaginal microbiome data?

To be clear, this is modern AI writing modeling code versus human-coded predictive models, not humans manually processing the data.

One dataset included around 360,000 molecular features. Another required parsing genomic data from public repositories. In the original competitions, human teams spent up to three months developing and tuning their models.

The AI systems were given a carefully written prompt describing the dataset and the task. Then they had to generate executable R or Python code from scratch. Researchers ran that code and measured how well the resulting models performed on unseen test data.

No special hints. No iterative coaching. Just one shot.
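The study's actual evaluation harness isn't reproduced in the article, but the one-shot protocol it describes can be sketched in a few lines of Python. Everything here is hypothetical stand-in code: the string below plays the role of an LLM's returned code, and it is executed exactly once, with no retries or coaching, before being scored on held-out inputs.

```python
# Toy sketch of a one-shot evaluation loop (all names hypothetical):
# one prompt goes out, one block of code comes back, and it is run
# exactly once -- no iterative coaching, no second chances.

# Stand-in for the code string an LLM might return for a prediction task.
generated_code = """
def build_model(train_x, train_y):
    # trivial "model": always predict the mean of the training labels
    mean_y = sum(train_y) / len(train_y)
    return lambda x: mean_y
"""

namespace = {}
exec(generated_code, namespace)   # single execution, as in the study's setup
model = namespace["build_model"]([1, 2, 3], [10.0, 12.0, 14.0])

# Score the resulting model on held-out inputs it has never seen.
predictions = [model(x) for x in [4, 5]]
print(predictions)  # [12.0, 12.0]
```

Whatever the returned code produces on that single run is what gets measured; a model that fails to execute simply scores nothing.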

The Results: Faster, Sometimes Better

Four of the eight AI systems successfully generated working code and usable prediction models.

One of them - OpenAI’s o3-mini-high - completed nearly all the tasks and scored the highest overall.

But here’s the part that surprised even the researchers: on the placental aging task, one AI-generated model outperformed the top human team from the original challenge. The difference was statistically significant.

In other words, the AI built a more accurate predictor of placental gestational age than the best human competitors had.

And it generated the code in seconds to minutes.

By contrast, the human teams had months to refine their approaches. Some built complex multi-stage random forest systems and leveraged additional clinical information. The AI, using a relatively straightforward ridge regression model, still won.
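The study's winning code isn't shown in the article, but a plain ridge regression of the kind described can be sketched with NumPy alone. The synthetic matrix below merely stands in for a high-dimensional omics dataset (the numbers are illustrative assumptions, scaled down from the ~360,000 features mentioned above so the example runs instantly).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an omics matrix: 300 "samples" x 200 "molecular
# features". Only a handful of features carry signal, as is typical.
n, p = 300, 200
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:20] = rng.standard_normal(20)          # a few informative features
y = X @ true_w + 0.5 * rng.standard_normal(n)  # noisy continuous outcome

# Split BEFORE any fitting, so no test information leaks into training.
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w = ridge_fit(X_tr, y_tr, lam=10.0)
pred = X_te @ w

# Score with the Pearson correlation between predicted and true values
# on held-out samples, a typical metric in DREAM-style challenges.
r = np.corrcoef(pred, y_te)[0, 1]
print(f"held-out correlation: {r:.2f}")
```

The point of the sketch is the simplicity: ridge regression is a one-line closed-form solve, yet on the placental aging task a model of roughly this class reportedly beat far more elaborate hand-tuned pipelines.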

Across the other tasks, AI models generally matched the median performance of human participants - solidly competitive, though not always beating the top experts.

Why This Matters

Preterm birth affects roughly 11 percent of infants worldwide and remains a leading cause of neonatal mortality. Clinicians still lack reliable predictive tools for many pregnancy complications.

Better models could mean earlier identification of at-risk pregnancies, more precise timing of interventions, and reduced long-term complications for children, among other things. But building those models is slow, requiring extensive writing, debugging, and standardizing of analysis pipelines.

And this is where the LLMs kick ass: they're especially strong at generating structured, reproducible workflows - loading data, splitting training and test sets properly, fitting models, calculating performance metrics, and even producing plots. Notably, none of the successful AI systems accidentally “leaked” test data into training - a surprisingly common human mistake that can inflate results.
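That leakage point is worth making concrete. A classic human mistake is standardizing (or otherwise preprocessing) the full dataset before splitting it, which lets test rows influence the training features. The NumPy sketch below, with made-up numbers, shows the discipline the successful systems apparently followed: statistics are computed on the training split only and merely reused on the test split.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 4))  # toy feature matrix
X_tr, X_te = X[:80], X[80:]                        # split FIRST

# Correct: derive scaling statistics from the training split only...
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
X_tr_scaled = (X_tr - mu) / sd
# ...then reuse those SAME statistics on the test split.
X_te_scaled = (X_te - mu) / sd

# Leaky (don't do this): statistics computed on the full matrix let the
# test rows shape the training features, quietly inflating scores.
# X_all_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
```

The same principle applies to feature selection, imputation, and any other step fitted to the data: fit on the training split, apply to the test split.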

That said, AI is still in its infancy and it wasn't all a slam dunk. In fact, half of the tested models failed outright - often due to basic coding issues like referencing nonexistent packages or mishandling data formats. R code proved more reliable than Python in this setting.

Even the top models were stochastic: run the same prompt multiple times, and you might get slightly different modeling strategies or results.

And there’s a deeper concern. If many researchers rely on similar AI systems, they may converge on similar modeling approaches. That standardization could improve reproducibility - but it might also reduce methodological creativity.

Where is this Going?

Large language models are already showing promise in reading medical records, generating radiology reports, and assisting in pathology analysis. What’s new here is that they’re moving beyond language tasks into hands-on data science, writing actual code. 

The authors emphasize that human oversight remains critical. AI models can hallucinate, misunderstand instructions, or silently make errors. Advanced API-based systems also come with cost and privacy considerations, particularly in clinical contexts.

The question is: will AI one, three, or five years from now be error-free - no hallucinations, and generally considered reliable?

h/t Capital.news

https://www.zerohedge.com/medical/ai-beats-human-research-teams-crunching-medical-data