Researchers say popular mental health chatbots can reinforce harmful stereotypes and respond inappropriately to users in distress.
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream. These tools don't have the ability to make new scientific discoveries on their own.
Happy Tuesday! Imagine trying to find an entire jury full of people without strong feelings about Elon Musk. Send news tips and excuses for getting out of jury duty to: [email protected]
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University.
The companions have their own X accounts, because of course they do. Ani's bio states, "Smooth, a little unpredictable—I might dance, tease, or just watch you figure me out. Let’s keep it chill… or not." Meanwhile, Rudy's just says, "The Only Pet in Grok Companion."
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
People are leaning on AI tools to figure out what is real on topics such as funding cuts and misinformation about cloud seeding. At times, chatbots will give contradictory responses.
Grok, the artificial-intelligence chatbot produced by Elon Musk-owned xAI, this week began posting antisemitic messages in response to user queries, drawing condemnation from Jewish advocacy groups and raising concerns about the AI tool. Musk said on Wednesday that the antisemitic posts, some of which have been deleted, are being addressed.
Security researchers found two flaws in an AI-powered chatbot used by McDonald’s to interact with job applicants.
The makers of FlirtAI, which promotes itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, have leaked 160,000 screenshots that users have shared with the app, according to an investigation by Cybernews.