The U.S. Department of Homeland Security (DHS) said on April 27 that past statements expressing what it labeled extremist views, made by immigrants applying for green cards and naturalization, would warrant closer scrutiny.
The DHS statement was in response to a New York Times report over the weekend that, citing internal DHS training materials, said that under new guidance introduced by the Trump administration, immigrants can now be denied a green card for expressing political opinions.
A spokesman for U.S. Citizenship and Immigration Services (USCIS), which falls under the purview of DHS, said certain behaviors and statements “may raise serious concerns for USCIS personnel reviewing an applicant’s file, including espousing terrorist ideologies, expressing hatred for American values, advocating for the violent overthrow of the United States government, or providing material support to terrorist organizations,” adding that such actions “warrant closer scrutiny.”
The New York Times report claimed that the Trump administration includes criticizing the state of Israel as a potentially disqualifying factor when applying for a green card or naturalization.
White House spokeswoman Abigail Jackson said that the administration’s policies had “nothing to do with free speech” and were meant to protect “American institutions, the safety of citizens, national security and the freedoms of the United States,” the paper reported.
The Epoch Times has contacted the White House and DHS for further comment but did not receive a response by publication time.
The New York Times report prompted criticism from lawmakers and rights groups, who have raised concerns regarding free speech and due process.
Sen. Chris Van Hollen (D-Md.) labeled the alleged instructions to immigration officers as “outrageous” in an April 27 post on X.
“Trump plans to deny legal residency in the US based on whether he agrees with your speech,” Van Hollen wrote.
“Since when did it become ‘anti-American’ to criticize the actions of a foreign government? Who is he fighting for?”
Nonprofit civil liberties group Defending Rights & Dissent said in an April 27 post on X that the move was an “incredibly disturbing attack on free speech, with the government deciding who can enter the country based purely on their expression of political views.”
Since Trump won the 2024 election and returned to the White House, the administration has adopted a harsher line on Palestinian advocacy movements it has deemed anti-Semitic, attempting to deport foreign protesters and threatening to freeze funding for universities where protests were held.
Last year, the Trump administration said it would vet immigration applications for “anti-Americanism” and anti-Semitism.
DHS stated on April 9, 2025, that USCIS would consider online expressions of anti-Semitic sentiment—particularly those endorsing violence, or terrorist groups such as Hamas, Hezbollah, and the Houthis—as grounds for denying immigration benefit requests.
The new policy, which went into effect immediately, also applies to physical harassment of Jewish individuals and will affect applicants for lawful permanent residency, foreign students, and individuals affiliated with educational institutions linked to anti-Semitic activity.
The policy directs USCIS officers to treat expressions of support for anti-Semitic violence or extremist ideologies as negative discretionary factors when evaluating applications.
Mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world. These incidents are a reminder of how real the threat of violence is—and how quickly violent intent can move from words to action.
People may also bring these moments and feelings into ChatGPT. They may ask questions about the news, try to understand what happened, express fear or anger, or talk about violence in ways that are fictional, historical, political, personal, or potentially dangerous. We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.
We’re sharing what we do to minimize uses of our services in furtherance of violence or other harm: how our models are trained to respond safely, how our systems detect potential risk of harm, and what actions we take when someone violates our policies. We are constantly improving the steps we take to help protect people and communities, guided by input from psychologists, psychiatrists, civil liberties and law enforcement experts, and others who help us navigate difficult decisions around safety, privacy, and democratized access.
How we mitigate risks of harm in ChatGPT.
Our Model Spec lays out our long-standing principles for how we want our models to behave: maximizing helpfulness and user freedom while minimizing the risk of harm through sensible defaults.
We work to train our models to refuse requests for instructions, tactics, or planning that could meaningfully enable violence. At the same time, people may ask neutral questions about violence for factual, historical, educational, or preventive reasons, and we aim to allow those discussions while maintaining clear safety boundaries—for example, by omitting detailed, operational instructions that could facilitate harm. The line between benign and harmful uses can be subtle, so we continually refine our approach and work with experts to help distinguish between safe, bounded responses and actionable steps for carrying out violence or other real-world harm.
As part of this ongoing work, we’ve continued expanding our safeguards to help ChatGPT better recognize subtle signs of risk of harm across different contexts. Some safety risks only become clear over time: a single message may seem harmless on its own, but a broader pattern within a long conversation—or across conversations—can suggest something more concerning. Building on years of work in model training, evaluations and red teaming, and ongoing expert input, we have strengthened how ChatGPT recognizes subtle warning signs across long, high-stakes conversations and carefully responds. We’ll share more about this work in the coming weeks.
Our safety work also extends to situations where users may be in distress or at risk of self-harm. In these moments, our goal is to avoid facilitating harmful acts, and also to help de-escalate the situation and guide people to real-world support. ChatGPT surfaces localized crisis resources, encourages people to reach out to mental health professionals or trusted loved ones, and in the most serious cases directs people to seek emergency help.
How we monitor and enforce our rules.
We assume the best of our users, but when we detect that someone is attempting to use our tools to potentially plan or carry out violence, we take action, including revoking access to OpenAI’s services. Our Usage Policies set clear expectations for acceptable use and make clear that we may prohibit use for threats, intimidation, harassment, terrorism or violence, weapons development, illicit activity, destruction of property or systems, and attempts to circumvent our safeguards. We take those policies seriously and work hard to enforce them.
We use automated detection systems to identify potentially concerning activity at scale. These systems analyze user content and behavior using a range of tools designed to identify signals that may indicate policy violations or harmful activity, including classifiers, reasoning models, hash-matching technologies, blocklists, and other monitoring systems.
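As a rough illustration of how layered automated detection can fit together, here is a hypothetical sketch in Python. It is not OpenAI’s actual implementation; every name, list, and threshold in it is invented. The idea is simply that cheap, high-precision checks (hash matching, blocklists) run alongside a learned risk score, and anything that trips a signal is routed to human review rather than acted on automatically.

```python
# Hypothetical sketch of a layered screening pass. Not OpenAI's actual system:
# every name, list, and threshold here is invented for illustration.
import hashlib
from dataclasses import dataclass
from typing import Optional

KNOWN_VIOLATING_HASHES: set[str] = set()   # hashes of previously confirmed violating content
BLOCKLIST_TERMS: set[str] = set()          # small set of high-precision phrases

@dataclass
class Flag:
    reason: str
    needs_human_review: bool = True

def score_violence_risk(text: str) -> float:
    """Stand-in for a trained classifier; a real system would call a model here."""
    return 0.0

def screen_message(text: str) -> Optional[Flag]:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in KNOWN_VIOLATING_HASHES:    # exact match against known violating content
        return Flag("hash_match")
    if any(term in text.lower() for term in BLOCKLIST_TERMS):
        return Flag("blocklist_term")
    if score_violence_risk(text) > 0.9:     # hypothetical threshold
        return Flag("classifier_high_risk")
    return None                             # no signal; nothing is escalated
```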
When an account or conversation is flagged, it is assessed in context by trained personnel. These human reviewers are trained on our policies and protocols, and operate within established privacy and security safeguards, meaning their access to user information is limited, conducted within secure systems, and subject to confidentiality and data protection requirements. Their role is to assess the flagged activity in context, including the content of the interaction, surrounding conversation, and any relevant patterns of behavior over time. This contextual review is important because automated systems may identify signals of potential concern without fully capturing intent or nuance.
The goal is to determine whether the flagged activity violates our policies and/or indicates that a user may carry out an act of violence, requires escalation for more detailed human review, or can be dismissed or deprioritized as low risk or non-violative. When we determine that a bannable offense has occurred, we aim to immediately revoke access to OpenAI’s services. That may include disabling the account, banning other accounts of the same user, and taking steps to detect and stop the opening of new accounts. We have a zero-tolerance policy for using our tools to assist in committing violence. People can appeal enforcement decisions, and we review those appeals to confirm the outcome.
We surface real-world support and refer to law enforcement when appropriate.
Most enforcement actions, including bans for violence, happen directly between OpenAI and the user, making clear they have crossed a line. But in some sensitive cases, we may contact others who are best positioned to help.
Where we assess that a case presents indicators of potentially serious, real-world harm, it is escalated for a more in-depth investigation, including assessing the overall level of risk using structured criteria. This stage is reserved for a limited subset of cases and is intended to ensure higher-risk scenarios are assessed with additional context and expertise. When conversations indicate an imminent and credible risk of harm to others, we notify law enforcement. Mental health and behavioral experts help us assess difficult cases, and our referral criteria are flexible to account for the fact that a user may not explicitly discuss the target, means, and timing of planned violence in a ChatGPT conversation but that there may still be a potential risk of imminent and credible violence.
Last fall, we introduced Parental Controls to help families guide how ChatGPT works in their homes. Parental controls allow parents to link their account with their teen’s account and customize settings for a safe, age-appropriate experience. Parents don’t have access to their teen’s conversations, and in rare cases where our system and trained human reviewers detect possible signs of acute distress, parents may be notified—but only with the information needed to support their teen’s safety. Parents are automatically notified by email, SMS, push notification, or all three.
Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support.
We learn, improve and course-correct.
We continue to strengthen our models, detection methods, review processes, and escalation criteria in response to observed usage, emerging risks, and input from internal and external experts. We are especially focused on hard cases: for example, where it is not clear whether a particular input is legitimate or poses a risk of harm; sophisticated attempts to evade safeguards; or when people repeatedly try to misuse our services. We will continue to prioritize safety while balancing privacy and other civil liberties so we can act on serious risks.
International Atomic Energy Agency chief Rafael Grossi posted on X on Tuesday that he had held a “focused and timely” meeting with US State Department official Christopher Yeaw on developments in Iran, nuclear safety at Ukraine’s power plants, and broader non-proliferation issues.
Had a focused and timely meeting with 🇺🇸 @StateDept Assistant Secretary @StateACN Christopher Yeaw on the current situation in Iran, on nuclear safety and security at Ukraine’s nuclear power plants and on the importance of the #NPT, particularly in the current context.
Global data center power demand is projected to hit 84 GW by 2027—a 50 percent jump from 2023 levels—with AI workloads accounting for 27 percent of that total, according to Goldman Sachs Research.
The grid cannot keep up with AI. For decades, electricity demand grew slowly and predictably, giving utilities comfortable margins to plan capacity years in advance. That model broke almost overnight. Between 2023 and 2024 alone, utilities’ five-year summer peak demand forecasts jumped from 38 GW to 128 GW, a more than threefold increase in a single planning cycle.
Unlike traditional server loads, which are relatively flat and predictable, AI inference and training jobs generate sharp, near-instantaneous power spikes. Large-scale GPU clusters can produce fluctuations of hundreds of megawatts within seconds. That’s a load behavior utilities have no historical model for.
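A toy simulation makes the contrast concrete. All numbers here are invented for illustration; the point is only that a cluster alternating between compute-heavy and checkpoint/communication phases produces step changes that a flat-load planning model never anticipates.

```python
# Toy comparison (all numbers invented): a flat traditional server load versus a
# training cluster that swings between compute-heavy and checkpoint/communication
# phases, producing megawatt-scale steps within minutes.
import random

def traditional_load_mw(minute: int) -> float:
    return 300 + random.uniform(-5, 5)            # roughly flat around 300 MW

def ai_training_load_mw(minute: int) -> float:
    in_compute_phase = (minute // 10) % 2 == 0    # phases alternate every 10 minutes
    return 500.0 if in_compute_phase else 150.0   # a 350 MW swing

trad = [traditional_load_mw(m) for m in range(60)]
ai = [ai_training_load_mw(m) for m in range(60)]
print(f"traditional swing over an hour: {max(trad) - min(trad):.0f} MW")
print(f"AI cluster swing over an hour:  {max(ai) - min(ai):.0f} MW")
```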
Energy companies are no longer treating hyperscale data centers as large customers to be served from the grid, but rather as anchor infrastructure to be co-built with.
What follows is a look at what that shift actually demands at the systems level — why natural gas is currently the only tool that can fill the gap at the required speed and scale, what that means for emissions commitments already being made today, and what the longer path to balancing this with storage, transmission, and cleaner alternatives realistically looks like.
Why natural gas is filling the gap today
The US currently generates around 40 percent of its electricity from natural gas, with coal and renewables making up most of the rest. However, neither can meet the requirements of AI data centers, which require firm, uninterrupted, gigawatt-scale power available around the clock. The present US grid is already under strain before data centers even enter the equation.
Renewables hit a hard wall here. Interconnection requests for new solar and wind projects face median wait times of over four years. In contrast, natural gas is cheap, abundant, and already flows through an extensive pipeline network across the country. And unlike new solar or wind projects, gas plants can be up and running in three to five years.
Even so, three to five years is not immediate. Demand is here now, and the gap between what the grid can deliver today and what data centers need is already being felt. Energy companies are trying to figure out how to keep up with this demand in different ways.
Entergy is spending $3.2 billion to build three natural gas plants totaling 2.3 GW specifically to power Meta’s new Louisiana data center, which requires 2 GW for computation alone. These plants carry a typical operational lifetime of around 30 years.
Others are betting that the infrastructure will attract the tenant. NextEra Energy, the US’s largest renewable developer, is partnering with ExxonMobil to build a 1.2 GW gas plant in the Southeast. CEO John Ketchum summed up the industry’s new posture: the AI sector is shifting toward “BYOG” — build your own generation.
Rethinking the engineering playbook
Power grids are engineered for predictability. Seasonal peaks, industrial cycles, and population growth are modeled to plan generation capacity for the future. Fitting AI into this picture requires much more than just scaling.
Training a large language model means thousands of GPUs running simultaneously, sustaining enormous power draws for days or weeks, then dropping off sharply. These spikes are unpredictable and can be extreme. Dispatch curves determine which plants run when, whereas reserve scheduling ensures backup capacity is always available. AI workloads stress both in ways utilities have no historical model for. The forecasting crisis this has created is visible in the numbers: utilities’ five-year peak demand forecasts more than tripled between 2023 and 2024.
Developers routinely file speculative interconnection requests for projects that never get built, flooding queues with phantom demand. ERCOT, Texas’s grid operator, developed an entirely new Adjusted Large Load Forecast methodology to account for exactly this — the gap between projected data center load and what actually materializes.
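The general idea behind discounting phantom demand can be sketched simply. The following is a hypothetical illustration, not ERCOT’s actual Adjusted Large Load Forecast methodology: each queued request is weighted by an estimated probability that the project actually gets built.

```python
# Hypothetical illustration of discounting speculative interconnection requests.
# This is NOT ERCOT's actual Adjusted Large Load Forecast methodology, only a
# simplified sketch of the idea: weight each queued megawatt by an estimated
# probability that the project is actually built.
queued_requests = [
    (1200, 0.9),   # (requested load in MW, estimated probability of materializing)
    (800, 0.5),
    (2000, 0.2),   # likely a speculative filing
    (600, 0.7),
]

nominal_mw = sum(mw for mw, _ in queued_requests)
adjusted_mw = sum(mw * p for mw, p in queued_requests)
print(f"nominal queue: {nominal_mw} MW, adjusted forecast: {adjusted_mw:.0f} MW")
```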
At the plant level, this is forcing a redesign of how generation assets are dispatched. When an AI model responds to a user query, it triggers a sudden, large power surge known as an inference spike. Gas peakers — plants designed for short, high-output bursts — are now being co-located with data center campuses specifically to absorb these inference spikes that baseload plants can’t respond to fast enough.
The physical grid is buckling under the same pressure. Transmission investment in many regions of the US declined steadily after 2015, leaving a system already running close to its limits. Now it’s being asked to absorb demand at a scale it was never designed for.
In Texas, CenterPoint Energy reported a 700% increase in large load interconnection requests between late 2023 and late 2024. In Virginia, another 50 GW of data center projects sit active in the queue. The costs reflect the strain.
Combined-cycle gas turbines (CCGTs) capture waste heat to generate additional electricity, making them efficient enough for round-the-clock demand. Installed costs for new CCGTs have nearly doubled to around $2,000/kW compared to plants built just a few years ago.
The market data tells the same story. The capacity market clearing price, which is the rate utilities pay to secure guaranteed power reserves for peak demand, has also increased. In PJM, the grid operator covering much of the Mid-Atlantic and Midwest, capacity market clearing prices for the 2026-27 delivery year jumped to $329 per megawatt-day — more than ten times the $28.92 per megawatt-day price from two years prior.
The gas plants being built today aren’t just a bridge to the AI boom; they’re a commitment. With an average operational lifetime of 30 years, they will still be running well past every major net-zero target on the books.
A natural gas plant emits around 490g of CO2 per kilowatt-hour over its lifetime. Scale that across the gigawatts of new capacity being greenlit today, and the emissions math becomes difficult to ignore.
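To put that per-kilowatt-hour figure at grid scale, here is the back-of-the-envelope arithmetic for a single gigawatt of gas capacity assumed to run around the clock (a simplification, since real capacity factors are lower):

```python
# Back-of-the-envelope arithmetic using the 490 g CO2/kWh lifecycle figure above.
# Assumes a 1 GW plant running around the clock; the point is only the order of magnitude.
INTENSITY_KG_PER_KWH = 0.490      # 490 g CO2 per kWh
PLANT_CAPACITY_KW = 1_000_000     # 1 GW expressed in kW
HOURS_PER_YEAR = 8_760

annual_kwh = PLANT_CAPACITY_KW * HOURS_PER_YEAR             # ~8.76 billion kWh
annual_tonnes = annual_kwh * INTENSITY_KG_PER_KWH / 1_000   # kg -> tonnes
print(f"~{annual_tonnes / 1e6:.1f} million tonnes of CO2 per GW-year")   # ~4.3 Mt
```

Roughly 4.3 million tonnes of CO2 per year for each gigawatt, before accounting for methane leakage upstream.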
Across the southern US, utilities are planning around 20 GW of new gas capacity over the next 15 years, with data centers accounting for 65 to 85% of projected load growth in Virginia, South Carolina, and Georgia alone. The methane problem compounds this.
Natural gas infrastructure (drilling, pipelines, compression) leaks methane continuously, both accidentally and through intentional venting. Methane traps around 80 times as much heat as CO2 over a 20-year horizon, making the emissions from a buildout of this scale difficult to quantify but impossible to ignore.
This is the policy fault line now opening up between energy companies, hyperscalers with net-zero commitments, and regulators who are only beginning to grapple with what AI’s energy appetite actually means for decarbonization timelines.
Policies and incentives
Several structural mechanisms are being put in place to eventually shift the balance, though none of them work fast enough to solve the immediate problem.
On the storage side, the Inflation Reduction Act of 2022 offers a 30% tax credit for standalone energy storage systems and zero-emission generation facilities placed in service after 2024. The credit applies not just to generation technologies like solar but also to storage infrastructure itself. This gives data center operators and utilities a financial reason to invest in battery systems needed to make renewables work around the clock.
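As a simplified illustration of how the credit changes storage economics, assume a hypothetical 500 MWh battery system at an invented installed cost of $400 per kWh; only the 30 percent credit rate comes from the provision above.

```python
# Simplified illustration of the 30% investment tax credit on standalone storage.
# The 500 MWh size and $400/kWh installed cost are invented for the example;
# only the 30% credit rate comes from the IRA provision described above.
SYSTEM_SIZE_KWH = 500_000         # 500 MWh battery system
INSTALLED_COST_PER_KWH = 400      # hypothetical installed cost, $/kWh
ITC_RATE = 0.30

gross_cost = SYSTEM_SIZE_KWH * INSTALLED_COST_PER_KWH   # $200 million
tax_credit = gross_cost * ITC_RATE                      # $60 million
net_cost = gross_cost - tax_credit                      # $140 million
print(f"gross ${gross_cost/1e6:.0f}M, credit ${tax_credit/1e6:.0f}M, net ${net_cost/1e6:.0f}M")
```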
On the generation side, nuclear is emerging as a leading zero-carbon option for AI data centers, given its ability to deliver firm, always-on power. Google is already moving in this direction, striking a deal with NextEra to restart the 615 MW Duane Arnold nuclear facility for 24/7 carbon-free power.
Transmission remains the hardest problem. A study by the Department of Energy identified significant transmission capacity gaps across nearly every US region — gaps that predate the AI demand surge and will take years of coordinated investment and permitting reform to close.
The path forward
AI’s power demands are arriving faster than the infrastructure built to serve them. The gas plants, the transmission upgrades, the storage credits, the nuclear restarts, none of it is moving at the speed the technology is.
At some point, that gap has to close. The question is whether it closes through deliberate investment and policy coordination, or through something more painful: power shortages, delayed data centers, and electricity bills that reflect the true cost of building a grid that wasn’t designed for this moment. Engineers and policymakers are working on the former. The clock is running on the latter.
The Justice Department on Tuesday sued Cloudera Inc., accusing the enterprise data and artificial intelligence company of deliberately engineering a hiring process that excluded American workers from at least seven lucrative technology positions while the firm pursued permanent residency sponsorship for foreign workers on temporary visas.
In a 14-page complaint filed with the Office of the Chief Administrative Hearing Officer, the department’s Civil Rights Division alleges that Cloudera, from March 31, 2024, through at least January 28, 2025, instructed job candidates to submit applications to a dedicated email address, amerijobpostings@cloudera.com, that rejected all external messages with an automated bounce-back error. The company did not advertise the roles on its public careers website or accept applications through its standard portal, as it did for non-sponsorship positions.
Cloudera then attested to the Department of Labor that it could not locate any qualified U.S. workers for the roles, which paid between approximately $180,000 and $294,000 annually, according to the filing. The positions included a Product Manager role in Santa Clara, California, with a listed salary range of $170,186 to $190,000.
The case marks one of the most detailed enforcement actions under the Justice Department’s Protecting U.S. Workers Initiative, which was relaunched last year and has already produced 10 settlements targeting employers accused of discriminating against American workers in favor of temporary visa holders.
“Employers cannot use the PERM sponsorship process as a backdoor for discriminating against U.S. workers,” Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division said in a statement. “The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs.”
On X, she wrote that the department had sued Cloudera “for discriminating against U.S. workers in favor of foreign visa holders for high-paying tech jobs” and warned employers that they are “on notice.”
A Technical Barrier With Regulatory Consequences
The complaint describes a recruitment system designed to satisfy the letter of permanent labor certification (PERM) rules while subverting their purpose. Under PERM, employers seeking to sponsor foreign workers for green cards must first demonstrate that no minimally qualified, willing, and available U.S. worker exists for the position through good-faith recruitment that mirrors normal hiring practices.
Cloudera posted the seven PERM-related jobs on a state job board, in newspapers, and in professional publications. But it deviated sharply from its standard process by refusing to list the positions on cloudera.com/careers and directing all applicants to the nonfunctional email address.
External candidates received a Google Groups error message stating that the group “may not exist, or you may not have permission to post messages to the group.” For at least nine months, Cloudera recorded no external applications through the address and made no effort to investigate or fix the issue. The company nevertheless certified in its PERM applications, under penalty of perjury, that it had conducted bona fide recruitment and found no qualified U.S. worker. No U.S. workers were hired for any of the seven positions during the relevant period.
One Worker’s Complaint Triggers Investigation
The investigation began after a single U.S. worker (the charging party, whose name is redacted) attempted to apply and received the bounce-back message. On January 10, 2025, the Immigrant and Employee Rights Section opened a charge-based investigation. Two months later, it launched an independent probe and concluded there was reasonable cause to believe Cloudera had engaged in a pattern or practice of citizenship-status discrimination, violating Section 1324b of the Immigration and Nationality Act.
The complaint brings three counts: deterring U.S. workers from applying, failing to consider applications that were submitted, and failing to hire qualified U.S. workers for positions the company had reserved for temporary visa holders.
Cloudera’s Dual Hiring Tracks
For regular, non-PERM vacancies during the same period, Cloudera advertised positions on its external website and accepted applications through its standard careers portal. Only the PERM-track roles, those intended to be filled through sponsorship of workers already on temporary visas such as H-1B, were funneled through the defective email channel. The filing describes this as a “separate recruitment and hiring process” that treated U.S. workers less favorably based on citizenship status.
Because Cloudera employed more than three workers during the relevant period, it is subject to the anti-discrimination provisions of the INA.
If the allegations are proven, Cloudera could face civil penalties for each individual discriminated against, back pay and interest for affected workers, and injunctive relief requiring changes to its recruitment practices.