Most people don’t think they spend enough time online to be heavily influenced by artificial intelligence.
But experts increasingly say it doesn’t take much.
A few minutes on YouTube. Scrolling Facebook while waiting in line. Watching TikTok videos before bed. Streaming shows recommended by Netflix. Checking headlines during lunch.
Behind nearly all of it are AI systems studying what grabs attention, triggers emotions, and keeps people online longer.
The process is often called “AI nudging” — the use of artificial intelligence to subtly steer human behavior.
And critics say the systems are doing exactly what they were designed to do.
Watch one video on almost any topic, and AI systems quickly begin serving up more of the same. The goal is simple: keep users engaged and online longer.
People have seen early versions of this for years. Search online for a dress, shoes, a phone, or lawn equipment, and ads for similar products may follow users around the internet for days or weeks.
But experts say modern AI systems are far more sophisticated.

That older form of online tracking mainly followed clicks and searches. Today’s AI systems study behavior in real time — learning what users react to emotionally, what keeps them watching, what makes them angry, anxious, entertained, or curious.
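The feedback loop described here can be illustrated with a toy sketch. This is a hypothetical example for illustration only — the categories, scores, and weighting scheme are invented and do not reflect any real platform's algorithm — but it shows the basic dynamic: a recommender that reinforces whatever holds a user's attention will, over time, serve more and more of it.

```python
import random

# Toy illustration of an engagement-driven feedback loop.
# Categories and engagement scores are hypothetical.
class ToyRecommender:
    def __init__(self, categories):
        # Start neutral: every category gets equal weight.
        self.weights = {c: 1.0 for c in categories}

    def recommend(self):
        # Pick a category in proportion to its learned weight.
        cats = list(self.weights)
        return random.choices(cats, weights=[self.weights[c] for c in cats])[0]

    def observe(self, category, engagement):
        # Reinforce whatever held attention (engagement between 0 and 1).
        self.weights[category] += engagement

random.seed(0)  # make the simulation repeatable
rec = ToyRecommender(["news", "outrage", "fitness", "cooking"])
for _ in range(200):
    shown = rec.recommend()
    # Simulated user: reacts strongly to "outrage", mildly to everything else.
    engagement = 0.9 if shown == "outrage" else 0.1
    rec.observe(shown, engagement)

# After many rounds, the most-reacted-to category dominates the feed.
print(max(rec.weights, key=rec.weights.get))
```

Nothing in the sketch "decides" to push outrage; the skew emerges purely from rewarding engagement, which is the dynamic critics describe.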
Critics describe AI nudging as “tracking on steroids.”
Unlike television years ago, today’s online world is deeply personalized. Two people can open the same app and see entirely different content based on what the AI has learned about them.
Click on crime stories, political outrage, conspiracy theories, fitness influencers, or emotionally charged commentary, and platforms feed users more of the same.
Over time, experts worry those repeated nudges can reinforce fears, habits, opinions, insecurities, and emotional behavior — sometimes pushing people deeper into isolation, outrage, anxiety, or ideological “rabbit holes.”
Children may be especially vulnerable.
Researchers warn that young people often do not realize they are being influenced. A child may think they are simply watching videos or playing games, while AI systems constantly study what holds their attention and feed them more of it.
Critics say many platforms are intentionally designed to make it difficult to stop. Endless scrolling, autoplay videos, notifications, and “streaks” — rewards for staying continuously active on apps day after day — are all meant to keep users engaged longer.
Many adults already recognize the feeling themselves.
People frequently describe checking phones without thinking, feeling anxious when devices are not nearby, or spending far more time online than they intended.
Mental health experts increasingly worry about the effects of constant digital stimulation, particularly on children and teenagers whose brains are still developing.
That concern is driving stronger action in other parts of the world.
In late 2024, Australia approved one of the toughest youth social media laws in the world, aiming to block children under 16 from major social media platforms. Lawmakers there cited growing concerns about addiction, mental health, and algorithm-driven manipulation.
The European Union has also moved aggressively on AI regulation. Its AI Act places restrictions on systems that exploit vulnerabilities tied to age or emotional weakness and requires greater transparency around AI-generated content.
Meanwhile, China — despite being home to some of the world’s largest technology platforms — has imposed strict limits on gaming time for minors and increased regulation of recommendation algorithms targeting young users.
The United States has largely left the industry to police itself.
America still relies heavily on company self-regulation and older internet laws, even though many of those laws were written long before modern AI systems existed.
The Children’s Online Privacy Protection Act, or COPPA, became law in 1998 — years before smartphones, TikTok, autoplay videos, and AI-powered recommendation systems became part of daily life.
Critics argue those laws were created for a much simpler internet, while today’s AI-powered platforms are vastly more powerful, personalized, and manipulative.
The Biden administration attempted to address some of those concerns through executive orders focused on AI safety, transparency, and consumer protections. President Joe Biden argued stronger safeguards were needed before AI systems became too deeply embedded in society.
But many of those efforts were rolled back at the beginning of the second Donald Trump administration, which has emphasized reducing regulations that could slow American AI development or weaken the nation’s competitive edge against rivals like China.
Supporters of that approach argue the United States cannot afford to fall behind in artificial intelligence.
Critics counter that the strategy may benefit technology companies while leaving children and families exposed to systems increasingly designed to compete for human attention every waking hour.
Some educators say the effects are already visible.
Kansas recently joined a growing number of states and school districts limiting cellphone use in classrooms, as teachers report students struggling to focus and growing uneasy without constant digital stimulation.
Critics say America is already behind in confronting AI manipulation and online addiction — and public pressure for stronger protections remains surprisingly limited.
In many countries that have moved or are moving toward tighter regulations, growing public concern helped force political action.
Experts say meaningful changes in the United States may not happen until Americans begin demanding stronger protections for themselves and their children in an online world increasingly shaped by AI systems designed to influence behavior.
