Friday, October 3, 2025

Humana 2-day rally



Stock Price Jump (Oct 3, 2025): Humana (NYSE:HUM) was trading around $272.05 midday (Oct 3), up over +6.0% on the day [1]. This follows gains on Oct 2, 2025 and reflects a sharp rebound from last week’s pullback.
Medicare Advantage Update: The stock rally was driven by a positive surprise in Medicare Advantage “star” ratings for 2026. Humana revealed that ~20% of its members are in 4+‑star plans (vs. a much lower base), and 14% in 4.5‑star plans (up from 3% in 2025) [2]. Investors cheered this news – Humana shares jumped ~3% on Oct 2 following the announcement [3] and added further on Oct 3.
Shift in Medicare Coverage: Recently, Humana said it will trim its Medicare Advantage footprint (plans in 46 states, down from 48 this year, covering about 85% of U.S. counties) as the insurer exits less‑profitable markets amid rising costs [4] [5]. This sector-wide pullback (also announced by CVS/Aetna and UnitedHealth) reflects soaring medical costs and government cuts [6] [7].
Financials & Guidance: In Q2 2025, Humana reported $32.39 billion in revenue (+9.6% YoY) and GAAP EPS of $4.51 (Adj EPS $6.27) [8]. The company raised full-year 2025 guidance – now expecting ~$17.00 in adjusted EPS and at least $128 billion in revenue [9]. GAAP EPS guidance was trimmed (to ~$13.77) due to one‑time adjustments [10]. Key metrics: Market cap ~$29.7 billion, forward P/E ~14.3, and dividend yield ~1.4% [11].
Analyst Outlook: Wall Street’s consensus on HUM is cautiously neutral. The average rating across 21 analysts is a “Hold,” with a 12‑month price target around $289 [12]. Price targets range widely ($245 – $344) [13]. Notably, Barclays cut its target to $245 on Oct 3, 2025 [14]. Conversely, some bullish analysts recently raised targets (e.g., Bernstein to $341 on Sept 5).
Market Sentiment: The broader market rallied on Oct 3 amid Fed rate‑cut optimism and U.S. “shutdown” news [15] [16]. Healthcare stocks led gains (+1.3% on the S&P), with Humana among the top performers (+5.6%) [17]. Investors appear to be rewarding the Medicare updates and improved guidance, though volatility remains high (14 moves >5% in past year [18]).

As of midday Oct 3, Humana shares were $272.05, up about +6.01% on the day [19]. Yesterday’s close (Oct 2) was $256.62 [20], so HUM has recovered much of last week’s slide. In fact, Oct 1 saw a ~5% drop (on news of scaling back Medicare Advantage plans) [21], but Oct 2’s star rating news spurred a ~3% gain [22]. By Friday morning, industry data showed HUM as one of the S&P 500’s top gainers (+5.6%) [23]. Year‑to‑date Humana is roughly flat, trading ~13% below its 52-week high (~$312 in Sept 2025) [24].
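As a quick sanity check on the quoted daily move, the midday price and prior close cited above imply the stated percentage gain (figures taken from this post; this is just illustrative arithmetic):

```python
# Daily percent change from the Oct 2 close to the midday Oct 3 price,
# using the figures quoted in the article.
prev_close = 256.62   # Oct 2 closing price
midday = 272.05       # midday Oct 3 price

pct_change = (midday - prev_close) / prev_close * 100
print(f"{pct_change:+.2f}%")  # → +6.01%, matching the quoted move
```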

AI Eroding Cognitive Skills in Doctors: How Bad Is It?

 2025 brought a strange convergence: College essays and colonoscopies both demonstrated what can happen when artificial intelligence (AI) leads the work.

First came the college data: An MIT team reported in June that when students used ChatGPT to write essays, they incurred cognitive debt and “users consistently underperformed at neural, linguistic, and behavioral levels” causing a “likely decrease in learning skills.” 

Then came the clinical echo. In a prospective study from Poland published last month in The Lancet Gastroenterology & Hepatology, gastroenterologists who’d grown accustomed to an AI-assisted colonoscopy system appeared to be about 20% worse at spotting polyps and other abnormalities when they subsequently worked on their own. Over just 6 months, the authors observed that clinicians became “less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

For medicine, that mix sparks some uncomfortable questions. 

What happens to a doctor’s mind when there’s always a recommendation engine sitting between thought and action? How quickly do habits of attention fade when the machine is doing the prereading, the sorting, even the first stab at a diagnosis? Is this just a temporary setback while we get used to the tools, or is it the start of a deeper shift in what doctors do?

Like a lot of things AI-related, the answers depend on who you ask.

A Coin With Many Sides

On the surface, any cognitive erosion in physicians caused by AI use is alarming. It suggests a fundamental disengagement from the task, and even automation bias: over-reliance on machine systems without realizing you’re doing it.

Or does it? The study data “seems to run counter to what we often see,” argues Charlotte Blease, PhD, an associate professor at Uppsala University, Sweden, and author of Dr. Bot: Why Doctors Can Fail Us―and How AI Could Save Lives. “Most research shows doctors are algorithmically averse. They tend to hold their noses at AI outputs and override them, even when the AI is more accurate.”

If clinicians aren’t defaulting to blind trust, why did performance sag when the AI was removed? One possibility is that attitudes and habits change with sustained exposure. “We may start to see a shift in some domains, where doctors do begin to defer to AI,” she says. “And that might not be a bad thing. If the technology is consistently better at a narrow technical task, then leaning on it could be desirable.” The key, in her view, is the “judicious sweet-spot in critical engagement.”

And the social optics can cut the other way. A recent Johns Hopkins Carey Business School randomized experiment with 276 practicing clinicians found that physicians who mainly relied on generative AI for decisions incurred a “competence penalty” in colleagues’ eyes. They were viewed as less capable than peers who didn’t use AI, with only partial relief when AI was framed as a second opinion.

If you accept the AI, you risk status; if you override it, you risk accuracy. That’s a design and governance problem, not just an attitude problem.

‘Erosion’ Depends on the Task

Nigam Shah, MBBS, PhD, professor of medicine at Stanford University, California, and chief data scientist for Stanford Health Care, argues the starting point is wrong. 

“The question to ask is how faithfully can AI tools take work off completely from my plate,” he says. “For example, when I trained, we used to count white blood cells manually in a microscope using a cell counter. Today we use a cell sorter. We do not ask whether a clinician’s skill in doing the differential cell count manually has atrophied.”

In other words, not every task deserves equal worry about erosion. Shah recommends avoiding diagnosis as the first frontier. “There is so much low-hanging fruit of mind-numbing drudgery to fix. Why do we want to go straight to the hardest tasks, like diagnosis acumen and treatment planning, for which we spend 10 years training the physician?”

Ethan Goh, MD, executive director of Stanford ARISE (AI Research and Science Evaluation), agrees that reframing the job list changes the stakes. But he also insists that “different” doesn’t have to mean “diminished,” which “implies that a doctor’s cognitive skills will deteriorate,” he says. The route to “different” starts with explicit task mapping. 

“For example, once we start mapping out different ways in which doctors are using AI — this is an ongoing study direction at Stanford and Harvard for which ARISE won a Macy’s AI in Education grant— we can start measuring the performance of AI alone versus AI with doctors for each of these tasks,” he says.

Then you assign attention with purpose. “We can make an educated decision on which tasks to apply doctors’ limited cognitive efforts on and which tasks we necessarily have to keep training and testing medical students, residents, and doctors on, so that their cognitive skills in these areas do not deteriorate,” Goh continues. 

He uses aviation as an analogy, pointing out that autopilot has made flying safer, “but pilots still log manual ‘stick-and-rudder’ time in simulators, practice failure modes, and undergo regular proficiency checks,” Goh says.

Where Skills Start to Slip

Goh argues it’s not that doctors’ skills simply vanish; the real challenge is figuring out which ones must be protected. “Robots and automated procedures are still quite some time away compared to knowledge work,” he says.

He does flag a near-term cognitive risk. “As AI becomes so good, or more than 99% accurate, human experts defer to the AI so much and become susceptible to automation bias and anchoring bias,” he says. That’s not a reason to stop; it’s a reason to shape how and when assistance appears.

If adoption is outpacing preparation, that’s a leadership and education gap to close, says Bertalan Meskó, PhD, director of The Medical Futurist Institute, Budapest, Hungary. “AI, and especially those developing AI, don’t care about physicians’ skills but focus on replacing data-based and repetitive tasks to reduce the burden on medical professionals,” he says. “It would be the responsibility of those designing medical curricula to make sure that while physicians learn to use a range of AI-based technology, their skills and understanding will not erode.”

The colonoscopy result underscores the urgency, and this isn’t just a framing dispute. 

“The use of stethoscopes has led to much higher levels of confidence and efficiency for medical professionals in diagnosing heart and lung conditions,” Meskó says. “The use of AI should lead to the same. What is already clear now is that simply implementing AI into medical workflows will not support that vision because AI is a much more complicated and much less intuitive technology than a stethoscope. It requires a whole new level of knowledge, skills, and a mindset to make it work in our favor.”

If you want a map of where skills wobble first, Chiara Natali, a PhD candidate at the University of Milan-Bicocca, has one. In a recent review, she and her colleagues found that AI threatens some of the most central parts of clinical practice: the hands-on skill of examining patients, the ability to communicate clearly and manage their concerns, the craft of building a differential diagnosis, and the broader judgment that ties it all together.

She also pointed to two vulnerabilities that cut across all those areas. One is uniquely tied to AI: the risk that clinicians either over-trust or reflexively dismiss algorithmic advice. The other is more collective: As teams lean on machines, they risk losing shared awareness, making it harder to spot errors or back each other up when skills start to fade.

“AI doesn’t just extend what clinicians can see (the sensory layer); it also shapes what they are inclined to decide (the decision layer),” Natali says. Tools that “rank differentials, suggest next steps, or pre-fill reports” can nudge deference and “risk eroding the meta-skill of judging when not to follow a recommendation.” 

Can we get lost skills back? “Principles of neuroplasticity suggest ‘use it or lose it’,” Natali says. “And conversely, use it deliberately to regain it.” 

Goh’s hypothesis is that it has less to do with training and education and more to do with “thoughtful design of human computer interactions,” he says. “How do we design an AI product or clinical decision support that fits into the doctor’s existing workflow? How can we introduce visual and other cues that alert a doctor when necessary?” 

He points to practical patterns — triage queues for radiology; traffic-light “safety net” alerts; AI-drafted eConsults that keep the primary care provider (PCP) in charge. “This means that the PCP is still in control, while benefiting from education and awareness about actions he could take sooner on behalf of the patient.”

The Future: Inevitabilities Both Good and Bad

The longer horizon returns to identity. Blease is unapologetically direct. “I believe that over time, some degree of deskilling is inevitable, and that’s not necessarily a bad thing,” she says. “If AI becomes consistently better at certain technical skills, insisting that doctors keep doing those tasks just to preserve expertise could actually harm patients.” 

She warns against a double standard that “holds AI to a far higher standard than we hold human doctors.” And she asks the question most institutions dodge. “We need to start thinking about a post-doctor world. We will need a variety of healthcare professionals who are AI-informed. A wide variety of new roles will emerge including in the training and the testing and ethical oversight of these tools, in curating data sets, and in working alongside these tools, to deliver care and improve patient outcomes.” 

Nearer term, she wants “clear guidance and short, practical training for clinicians,” including “the ethical dimensions of AI: bias, transparency, privacy.”

So where does that leave a practicing internist, surgeon, or gastroenterologist who will be urged (and maybe even compelled) to use more AI over the next 5 years? 

The experts don’t always see eye to eye, but their advice points in a similar direction. Pick the right jobs for the machine first. Use AI ruthlessly to strip administrative drag so scarce human attention can be spent where it matters. Measure reliance and performance; sequence assistance after initial effort; protect deliberate practice without the tool. Expect skills to be redistributed and plan for that in curricula, credentialing, and team design. Design products that keep clinicians in the loop at the right moments and explainability front-and-center. Teach clinicians how to tell patients when they trusted the machine and when they did not.

The best outcome is that AI reshapes medicine on purpose: We choose the tasks it should own, we measure when it helps or harms, and we train clinicians to stay exquisitely human while the machines do scalable pattern work. In that future, clinical judgment is less displaced than redeployed, with physicians spending fewer hours wrestling software and more time making sense of people.

https://www.medscape.com/viewarticle/ai-eroding-cognitive-skills-doctors-how-bad-it-2025a1000q2k

AbbVie trims annual profit forecast after expected $2.7 billion R&D hit

AbbVie said on Friday it has lowered its annual profit forecast after flagging an expected $2.7 billion charge related to in-process research and development (IPR&D) expenses in the third quarter.

Shares of the North Chicago-based company were down nearly 1% at $232.00 in extended trading.

AbbVie said in a regulatory filing that such expenses may arise from collaborations, licensing deals or asset buys, but are not forecast due to uncertainty around timing and occurrence. It did not specify how the expense was incurred.

Including the third-quarter charge, AbbVie now expects full-year adjusted earnings per share between $10.38 and $10.58, compared with the prior range of $11.88 to $12.08.

Analysts were expecting full-year adjusted EPS to be $12.02, according to data compiled by LSEG.

The company's previous forecast for full-year adjusted earnings, issued on July 31, excluded any IPR&D expenses beyond the second quarter, it said.

AbbVie added that results for the quarter ended Sept. 30 have not been finalized and are subject to its financial statement closing procedures.

"There can be no assurance that our final results will not differ from these preliminary estimates," the company said.

It forecast third-quarter adjusted EPS in the range of $1.74 to $1.78, including the impact of the IPR&D expense, much lower than the analysts' estimate of $3.27.

Separately, AbbVie said earlier this week it started building a new active pharmaceutical ingredient manufacturing plant in North Chicago, Illinois. The $195 million plant is expected to produce medicines in immunology, oncology, and neuroscience, and be fully operational by 2027.

AbbVie has been leaning on newer immunology drugs Skyrizi and Rinvoq to offset declining sales of its blockbuster arthritis treatment Humira, which began facing biosimilar competition in the U.S. in 2023. The company has spent more than $20 billion on acquisitions since then to bolster its pipeline.

https://finance.yahoo.com/news/abbvie-trims-annual-profit-forecast-212521903.html

Shooting in France's Nice leaves 2 dead, 5 injured

 A shooting in the Les Moulins neighborhood near Nice, France, resulted in two fatalities and five injuries on Friday, according to the Alpes-Maritimes prefecture.

"Laurent HOTTIAUX, Prefect of the Alpes-Maritimes, is on site and strongly condemns this heinous act. On the decision of the Minister of the Interior, reinforcements will be mobilized starting tomorrow to ensure a return to security in the neighborhood and will remain as long as necessary," the prefecture wrote on X.

Mayor Christian Estrosi added that an incident involving a Kalashnikov during a gang incursion linked to narco-banditry in the Moulins neighborhood resulted in the casualties.

https://breakingthenews.net/Article/Shooting-in-France's-Nice-leaves-2-dead-5-injured/64925073

IAEA, Moscow, Kiev talk Zaporizhzhia power restart

 International Atomic Energy Agency (IAEA) Director General Rafael Grossi shared on Friday that he is currently discussing proposals with both Russia and Ukraine regarding the restoration of off-site power at the Zaporizhzhia Nuclear Power Plant (ZNPP).

"Both sides say they stand ready to conduct the necessary repairs on their respective sides of the frontline. But for this to happen, the security situation on the ground must improve so that the technicians can carry out their vital work without endangering their lives," Grossi stated.

The nuclear facility has been using emergency backup electricity for the past 10 days. "For now, the site's emergency diesel generators are functioning without problems, and there is also plenty of fuel in reserve," Grossi said, but warned that such an "unprecedented situation" must be resolved as soon as possible.

https://breakingthenews.net/Article/IAEA-Moscow-Kiev-talk-Zaporizhzhia-power-restart/64924701

Hamas agrees to release all hostages, enter talks



Hamas issued a statement on Friday confirming that it has submitted its response to United States President Donald Trump's 20-point plan on ending the Gaza war and stressing that it agreed to free all hostages.

The Palestinian militant group noted it made the decision "after extensive study" in an effort to "achieve a halt to the war and a complete withdrawal from the Strip." Hamas explained that it has agreed to release all living hostages, as well as the bodies of the deceased ones, and affirmed its "readiness to enter immediately, through mediators, into negotiations to discuss the details of that."

"The Movement also renews its approval to hand over the administration of the Gaza Strip to a Palestinian body of independents (technocrats) based on Palestinian national consensus and based on Arab and Islamic support," Hamas stated, adding that other parts of Trump's plan regarding Gaza's future are "linked to a comprehensive national position."

New York Targets Bitcoin Mining With Proposed Tax Hike Bill

 by Frank Corva via BitcoinMagazine.com,

Yesterday, two members of the New York State (NYS) Senate introduced Senate Bill 8518 (S8518), which would impose excise taxes on digital asset mining that uses the proof-of-work consensus mechanism, making it even more difficult than it already is for bitcoin miners to operate in the state.

S8518, which was co-sponsored by Liz Krueger (D) and Andrew Gounardes (D), stipulates that bitcoin and other digital asset miners in the state would pay tiered taxes based on the amount of energy they use.

The rates are as follows:

  • 0 cents per kilowatt-hour (kWh) for every kWh less than or equal to 2.25 million kWh per year

  • 2 cents per kWh for every kWh between 2.25 million and 5 million kWh per year

  • 3 cents per kWh for every kWh between 5 million and 10 million kWh per year

  • 4 cents per kWh for every kWh between 10 million and 20 million kWh per year

  • 5 cents per kWh for every kWh over 20 million kWh per year
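A minimal sketch of how the proposed tiers would translate into a tax bill, assuming the rates apply marginally to each bracket of annual usage (an assumption for illustration; the bill text governs the actual mechanics):

```python
# Estimated S8518 excise tax, assuming marginal brackets (an assumption;
# consult the bill text for the actual mechanics).
# Tiers: (upper bound in kWh/year, rate in $/kWh).
BRACKETS = [
    (2_250_000, 0.00),    # first 2.25M kWh: untaxed
    (5_000_000, 0.02),    # 2.25M-5M kWh: 2 cents/kWh
    (10_000_000, 0.03),   # 5M-10M kWh: 3 cents/kWh
    (20_000_000, 0.04),   # 10M-20M kWh: 4 cents/kWh
    (float("inf"), 0.05), # above 20M kWh: 5 cents/kWh
]

def s8518_tax(kwh_per_year: float) -> float:
    """Return the estimated annual excise tax in dollars."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if kwh_per_year > lower:
            tax += (min(kwh_per_year, upper) - lower) * rate
        lower = upper
    return tax

# Example: a facility drawing 25 million kWh per year pays
# 2.75M*0.02 + 5M*0.03 + 10M*0.04 + 5M*0.05 = $855,000.
print(f"${s8518_tax(25_000_000):,.0f}")  # → $855,000
```

Under this reading, a small operation staying under 2.25 million kWh per year would owe nothing, while the marginal rate climbs with consumption, much like income tax brackets.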

The proposed taxes will not apply to miners who utilize renewable energy sources, as defined by Section 66-P of NYS public service law, to power their facilities. The mining facility would also have to “not [be] operated in conjunction with an electric corporation’s transmission and distribution facilities,” according to the bill.

The bill also stipulates that all taxes, interest, and penalties collected as a result of this potential law be used to subsidize energy customers enrolled in NYS energy affordability programs.

The introduction of this bill comes approximately one year after NYS’ digital asset mining moratorium expired. The moratorium banned any digital asset mining that required the use of fossil fuels.

Now that bitcoin mining companies can technically operate in the state again, they will likely think twice about doing so, as the increased taxes may push these companies to set up facilities elsewhere in the U.S.

This new bill is just another in a series of bad regulatory proposals from Democratic lawmakers and bureaucrats in NYS that disincentivize Bitcoin and crypto companies from setting up in the state.

Instead of thinking about the jobs that the bitcoin mining industry could bring to upstate New York, home to a number of cities and regions that suffer from poverty in this post-industrial era, Democrats seem more hellbent on sticking it to bitcoin miners.

https://www.zerohedge.com/crypto/new-york-targets-bitcoin-mining-proposed-tax-hike-bill