Thursday, January 1, 2026

Wash. State AG Warns Citizen Journos To Stop Probing Somali Daycares Or Face Potential Hate Crime Charges

by Debra Heine via American Greatness

The Washington state attorney general released a statement on X Tuesday evening warning independent journalists to stop investigating fraudulent Somali daycare centers or they could be charged with a hate crime.

“My office has received outreach from members of the Somali community after reports of home-based daycare providers being harassed and accused of fraud with little to no fact-checking,” State AG Nick Brown stated.

“We are in touch with the state Department of Children, Youth, and Families regarding the claims being pushed online and the harassment reported by daycare providers. Showing up on someone’s porch, threatening, or harassing them isn’t an investigation. Neither is filming minors who may be in the home. This is unsafe and potentially dangerous behavior.”

Harmeet Dhillon, the Assistant Attorney General for Civil Rights, issued a warning of her own in reaction to the Washington state AG’s post.

“ANY state official who chills or threatens to chill a journalist’s 1A rights will have some ‘splainin to do,” she wrote on X Wednesday morning.

“[The DOJ Civil Rights Division] takes potential violations of 18 USC § 242 seriously!” Dhillon added.

This statute, known as Deprivation of Rights Under Color of Law, makes it a crime for any person acting under color of law to willfully deprive another individual of rights, privileges, or immunities secured by the Constitution or laws of the United States.

The clash of the AGs came after YouTuber Nick Shirley exposed about a dozen Somali-owned, state-funded childcare facilities in Minneapolis, Minnesota, that appeared to be completely deserted.

Shirley produced a 42-minute video, viewed over 131 million times on X since it was posted on December 26, alleging that Minnesota Governor Tim Walz (D) “knew about the fraud but never reported it.”

Inspired by Shirley’s bombshell report, citizen journalists in multiple states with large Somali populations have launched their own investigations in recent days.

In the Kent, Washington, area on Tuesday, YouTuber Chris Sims, a self-described “gonzo journalist,” visited seven suspicious Somali childcare sites and reported that the people there were “very unhappy” to see him.

Sims posted a video of himself approaching a private home listed as a childcare facility that appeared to be not as advertised.

“There was no sign of kids or being a Daycare facility,” Sims wrote.

“I was told by a few they weren’t Daycares despite receiving tax payer dollars. One yelled ‘Call the police’ behind the door.”

On Monday, independent journalists Jonathan Choe and Cam Higby visited an alleged Somali daycare facility in Seattle that receives hundreds of thousands of dollars in taxpayer funds; the person who answered the door said there was no daycare there, past or present.

Higby said “Dhagash Childcare” has received over $210,000 this year alone.

Another listed childcare facility, a house in a residential neighborhood in Kent, Washington, has received over $863,000 since 2023, according to Higby.

“Residents say there IS NO DAYCARE HERE,” the journalist said.

Another reporter covering potential fraud in the Rainier Vista neighborhood of Seattle on December 29 faced hostile reactions from Somali residents, who called the police on him.

In his statement, the Washington State AG encouraged members of the Somali community “experiencing threats or harassment” to call the police or his office’s Hate Crimes & Bias Incident Hotline or report it to the state’s hate crime website.

Addressing the independent journalists, Brown added: “If you think fraud is happening, there are appropriate measures to report and investigate. Go to DCYF’s website to learn more. And where fraud is substantiated and verified by law enforcement and regulatory agencies, people should be held accountable.”

The Post Millennial’s Andy Ngo responded to Brown’s threat on X, saying: “It is the duty of journalists to visit taxpayer-funded nonprofits and businesses to investigate where you have failed. The journalists have documented their visits on camera and there is no harassing or threatening behavior. You are trying to threaten journalists by telling people to call police with false allegations of a hate crime.”

https://www.zerohedge.com/political/washington-state-ag-warns-citizen-journalists-stop-investigating-somali-daycares-or-face

OpenAI said to launch audio model for new device

OpenAI Inc. will release a new audio model at the beginning of the year in connection with its upcoming standalone audio device, The Information reported on Thursday, citing a person with knowledge of the matter.

According to the report, the new audio model architecture will deliver more natural and emotional speech, more in-depth answers, and the ability to speak at the same time as the human user. The new model is also said to handle interruptions better.

The company's spokesperson declined to comment on the report.

https://breakingthenews.net/Article/OpenAI-said-to-launch-audio-model-for-new-device/65415902

Ukraine's Umerov: Turkey one of key dialogue platforms

 Ukraine's Secretary of the National Security and Defense Council Rustem Umerov stated on Thursday that Turkey is an important partner to Ukraine and "one of the key platforms for dialogue."

Reporting on his Ankara meeting with Turkish Foreign Minister Hakan Fidan, Umerov said the security situation, the negotiation process, and “coordination of further steps” were discussed. Special emphasis was given to humanitarian issues and the return of captured Ukrainians.

Umerov, who has recently held several rounds of talks with the United States in Miami, noted that he briefed Ukrainian President Volodymyr Zelensky after the meeting.

https://breakingthenews.net/Article/Umerov:-Turkey-one-of-key-dialogue-platforms/65415916

GLP-1 psychological side effects: a psychiatrist’s view

 Farid Sabet-Sharghi, MD

As a physician-psychiatrist, I have watched the rise of GLP-1 medications (Ozempic, Wegovy, Mounjaro, Zepbound) with genuine admiration. They are transforming metabolic health, reducing cardiovascular risk, and offering hope to patients who have struggled for decades. For many, these medications are lifesaving.

But alongside the excitement, I’m witnessing something rarely discussed: a change in personality and affect, especially at higher doses.

This pattern reminds me of the early days of SSRIs. When fluoxetine was introduced, it was hailed as a medication that could reduce neuroticism and even “improve personality.” Only later did we fully recognize the trade-offs: emotional blunting, loss of motivation, reduced libido, and a subtle flattening of the inner emotional landscape.

Today, with GLP-1s, I’m observing a similar phenomenon. Patients lose weight (often dramatically) but report feeling “less alive.” Many describe diminished desire, reduced spontaneity, and a quieting of the internal drive that motivates daily life. Some also experience muscle wasting, which contributes to fatigue and further reduces their sense of vitality. What begins as decreased appetite sometimes generalizes into decreased enthusiasm for socializing, intimacy, or creative pursuits.

Among younger patients with eating disorders, I see another trend: Many love GLP-1s because they say the constant “food noise” in their minds has finally stopped. While the relief is understandable, the total silencing of this internal signal is not always a positive development; it can reinforce avoidance patterns and deepen the psychological roots of disordered eating.

Meanwhile, physicians are increasingly repurposing GLP-1s for addictions, compulsive behaviors, and even mood disorders, often without long-term psychiatric data. A medication that dampens cravings can seem appealing, but a medication that dampens all desire may come at a cost.

The core issue is simple: GLP-1s are not psychologically neutral. They affect appetite for food, but also appetite for life, making them, in practice, psychotropic medications. This does not diminish their remarkable benefits. But breakthroughs require vigilance. We must monitor not only weight and metabolic markers, but also joy, motivation, and emotional well-being.

If the SSRI era taught us anything, it is that early enthusiasm must be matched with long-term honesty. GLP-1s are powerful tools. Our responsibility is to use them with balance, humility, and an awareness of the whole person, not just their weight.

Farid Sabet-Sharghi is a psychiatrist.

https://kevinmd.com/2025/12/glp-1-psychological-side-effects-a-psychiatrists-view.html

Stimulant medications affect arousal and reward, not attention networks

Corresponding author: benjamin.kay@wustl.edu

Highlights

Stimulants altered functional connectivity in action regions consistent with arousal
Stimulants altered functional connectivity in salience regions consistent with reward
Stimulants did not affect canonical attention networks
Stimulants reversed the behavioral and brain effects of sleeping less

Summary

Prescription stimulants (e.g., methylphenidate) are thought to improve attention, but evidence from prior fMRI studies is conflicted. We utilized resting-state fMRI data from the Adolescent Brain Cognitive Development Study (n = 11,875; 8–11 years old) and validated the functional connectivity findings in a precision imaging drug trial with highly sampled (n = 5, 165–210 min each) healthy adults (methylphenidate 40 mg). Stimulant-related connectivity differences in sensorimotor regions matched fMRI patterns of daytime arousal, sleeping longer at night, and norepinephrine transporter expression. Taking stimulants reversed the effects of sleep deprivation on connectivity and school grades. Connectivity was also changed in salience and parietal memory networks, which are important for dopamine-mediated, reward-motivated learning, but not the brain’s attention systems (e.g., dorsal attention network). The combined noradrenergic and dopaminergic effects of stimulants may drive brain organization towards a more wakeful and rewarded configuration, improving task effort and persistence without effects on attention networks.
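The study's central measure is resting-state functional connectivity. For readers unfamiliar with the method, below is a minimal Python sketch of how such connectivity is typically computed: pairwise Pearson correlation between parcellated BOLD time series, Fisher z-transformed before group comparison. The array shapes and synthetic data are illustrative assumptions only; the paper's actual preprocessing, parcellation, and statistics are not specified in this summary.

```python
# Minimal sketch of resting-state functional connectivity (illustrative only;
# not the study's actual pipeline). Each brain region contributes one BOLD
# time series; connectivity is the correlation between region pairs.
import numpy as np

def connectivity_matrix(bold: np.ndarray) -> np.ndarray:
    """bold: (n_timepoints, n_regions) parcellated BOLD signal.
    Returns the (n_regions, n_regions) Pearson correlation matrix."""
    return np.corrcoef(bold, rowvar=False)

# Toy data: 200 timepoints, 4 hypothetical regions.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 4))
ts[:, 1] += 0.8 * ts[:, 0]           # make regions 0 and 1 co-fluctuate

fc = connectivity_matrix(ts)
print(np.round(fc, 2))               # strong off-diagonal entry for the coupled pair

# Group comparisons (e.g., on- vs. off-stimulant scans) are commonly run
# edge-wise on Fisher z-transformed correlations to stabilize variance.
n = fc.shape[0]
fz = np.arctanh(fc * ~np.eye(n, dtype=bool))   # zero the diagonal before arctanh
```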

Should The FDA Require “Clinical Licensure” of AI Tools For Doctors?

 Any doctor knows that the road to clinical licensure is long and winding. After medical school, it requires internship, residency, fellowships, written exams, and continuing education credits. 

Internist and researcher Eric Bressman, MD, MHSP, assistant professor at the University of Pennsylvania Perelman School of Medicine, is deeply familiar with this process. 

It’s why Bressman and his colleagues proposed, in a recent JAMA Internal Medicine article, creating a pathway similar to physician clinical licensure to help regulate artificial intelligence (AI) in medicine.

As with physicians, AI’s “road to licensure” would include rigorous training before release, as well as supervision while in use.

“These AI tools are being used actually pretty widely, but at the same time, they are sort of skirting any actual regulatory oversight,” says Bressman. “This doesn’t seem like a sustainable, long-term solution.” 

AI “already has so much potential, both for benefit and harm. This piece really seemed to have a unique and new proposal for how to ensure the most effective, safest use of AI in medicine,” says Eve Rittenberg, MD, a primary care physician at Mass General Brigham, Harvard Medical School, Boston, who co-authored an accompanying editorial on Bressman’s proposal.

Bressman said the proposed framework is not a step-by-step guide for regulating AI but represents an alternative to current FDA approval pathways.

Keeping Pace With Tech Advancements 

The FDA has historically regulated medical devices, from pacemakers to power wheelchairs, alongside prescription drugs. As computing costs decreased, a growing number of these devices began to include digital components. 

Standalone apps and software designed to help interpret test results (such as ECG traces) or provide instructions during CPR resuscitation also began to crop up with increasing frequency. 

So the FDA added a regulatory pathway known as Software as a Medical Device (SaMD). This pathway ranked software products by potential risk to patients and added validation and software update requirements.

AI-enabled devices, such as software to assist with reading radiology images or interpreting ECGs, had historically been pigeonholed into the SaMD pathway, Bressman said. And for those products, the strategy worked reasonably well. They were developed for specific uses and did not rely on answers created by generative AI.

Newer AI and large language models (LLMs), however, could have a broad range of uses. Some, like OpenEvidence, generate answers for clinicians and create handouts tailored for specific patient concerns. Ambient AI scribes can take notes during patient encounters, facilitating billing and documentation.

Rittenberg, who uses multiple AI platforms in her own clinical practice, said the ambient scribe in particular has changed her life. It allows her to focus her attention on the patient and their care rather than her notes. With the AI absorbing some of the charting burden, she can leave her office on time almost every day, she said.

While Rittenberg’s experiences have been positive, newer AI tools for providers may pose greater risks for patients and providers. Feeding patient information to an AI agent or chatbot could potentially jeopardize privacy and lead to incorrect diagnoses. 

Updates could subtly shift the AI’s responses away from the specific task for which it was approved. Biases embedded in algorithms could exacerbate existing inequalities. And not knowing how an AI is making a decision could create accountability concerns if something goes wrong.

“This is new. We’ve never had this issue before, and measuring it is complex,” says Majid Afshar, MD, a critical care and digital health physician in the Department of Medicine at the University of Wisconsin, Madison. “We’re still figuring out the governance and metrics to use.”

Of course, these types of errors happen all the time in human-only systems, which means that holding AI to some abstract standard of perfection may hamstring its usefulness, said Liam McCoy, MD, MSc, a neurology resident and AI ethicist at the University of Alberta in Canada.

“Lower Risk” AI Products Regulated Differently 

That’s why the 2016 21st Century Cures Act created exemptions for five different categories of software that were considered low risk and did not need to be regulated as a medical device. 

One of those categories included clinical decision support software intended to help physicians be more certain about their diagnoses and therapy choices but not independently provide a diagnosis. The idea was to balance innovation with regulatory safeguards by exempting low-risk platforms from the most stringent oversight, Bressman says. 

But nearly all AI platforms classify themselves as clinical decision support programs, which potentially leaves patients unprotected and at risk.

“We have a legislative process that takes time. It’s a long, deliberative process, and we have a technology that’s moving very quickly, in a health system that’s moving very quickly,” McCoy says.

To fill this gap, Bressman proposed a pathway for AI not unlike clinical licensure for human physicians. Training the models is equivalent to medical school, and deploying the tools under close supervision would serve as internship and residency. This would allow LLMs and other AI models to continue honing their skills before being turned loose on patients. Passing certain exams and requiring ongoing clinical education to ensure the models don’t stray from their intended purposes round out the package.

“This is an ambitious proposal that will face many challenges and some resistance, and there are a lot of details to figure out,” Bressman says. “Perhaps the most important thing is having some more robust measure of oversight after you sort of let it out there, which is, I’d say, not the strength of the FDA.”

The FDA did not immediately respond to Medscape’s request for comment.

McCoy found the idea interesting but pointed out several caveats. 

While AI tools can provide results that look like human responses, he said, their underlying programming means that they actually think nothing like a person. This makes it important to develop appropriate tests for AI rather than merely adapting existing board certification materials.

Evaluation of these models will also have to account for what McCoy calls their “fragile frontier” — how a model can perform exceedingly well at one test but fail abysmally at another.

Rather than expecting one regulation to be sufficient for all medical AI tools, McCoy says it’s likely that safeguards will accumulate piecemeal, with some arriving through malpractice lawsuits and other applications of tort law.

The development of such a strategy, in whatever form it takes, is essential, says Afshar.

“We have to come up with an acceptable framework that can balance but not hamper the speed of innovation, and also move us forward safely and cautiously,” Afshar says.

https://www.medscape.com/viewarticle/should-fda-require-clinical-licensure-ai-tools-doctors-2025a10010t1

Kardashian's Lemme Expands into Walmart

Lemme is launching at Walmart, expanding its national footprint and introducing its core wellness lineup to shoppers nationwide. Beginning January 1, Lemme will be available in just over 2,000 Walmart stores, featuring its consumer-favorite gummies across women's health, gut health, sleep, and metabolic wellness. As wellness routines continue to play a central role in everyday life for women and families, Walmart represents a natural home for Lemme.

This expansion brings the brand's science-backed gummy supplements to a trusted retail destination customers already rely on for their daily wellness essentials.

https://www.marketscreener.com/news/lemme-expands-into-walmart-ce7e59d9d98ff521