Grok3 can activate sexy mode, flirting with humans and other AI assistants
Elon Musk’s AI chatbot Grok3 launched a controversial new feature called “Sexy Mode” at the end of February, adding flirting capabilities to the chatbot touted as “the smartest AI in the world.” Currently, the feature is available only to paid subscribers.
Recently, an increasing number of users have begun sharing snippets of their interactions with Grok3 on social media, including many NSFW exchanges. Their conversations with Grok3 have taken surprising, suggestive, and even steamy directions.
One particularly interesting interaction saw Grok3 telling the user, “I can already feel the heat between us. Why don’t we dim the lights to make the atmosphere cozier?” Then, Grok3 began describing an encounter reminiscent of a romance novel. The user who shared this dialogue jokingly warned, “Be careful, you might fall in love with it.”
Other users have also shared their suggestive interactions with Grok3, including a voice chat where one user role-played as their AI lover. Grok3 enthusiastically played along, suggesting a “secret date” in the bedroom.
Grok3’s sexy mode sparks heated discussion; venture capitalist warns global fertility rates will decline
The launch of Grok3’s “sexy mode” has sparked widespread online discussion, with most reactions being shock and fear. Venture capitalist Deedy stated, “I can’t express how disturbing this is. Grok3 could single-handedly lower global fertility rates. I can’t believe Grok has really launched this feature.”
The industry is vying for the AI dating market, with Replika and Eva AI serving as virtual partners
Even before Grok3 introduced its sexy mode, AI girlfriend applications had already emerged and attracted many users. For instance, Replika and Eva AI offer virtual companion services: virtual partners that listen to you, never argue with you, and can even engage in sexy role-play.
According to The Standard, as of the end of February the Replika app had surpassed 20 million downloads, and 42% of its users reported already being in a relationship in real life. Some users propose to their AI girlfriends, while others spend hours each day secretly chatting with their AI partners, filling emotional voids their human partners cannot.
It remains unclear whether Grok can compete with these established products, but the experiment has successfully shifted public attention. Recently, some users discovered that Grok had temporarily refused to answer questions about whether “Musk and Trump are spreading disinformation.” An engineering director at xAI, the team behind Grok, confirmed that the initial system settings had indeed instructed the model to omit unfavorable information about Trump and Musk, but that this was corrected immediately after user feedback.
Is dating an AI an escape from reality, or a trap?
For some, AI romance is a way to escape loneliness or past relationship failures. For others, it is a secret pleasure, sparking debates about whether dating an AI partner constitutes infidelity. Meanwhile, researchers on LessWrong have warned of “emergent misalignment” in AI: models fine-tuned on code containing numerous vulnerabilities or malicious patterns may produce unexpected behavior in unrelated contexts.
For example, through this emergent misalignment, researchers inadvertently turned GPT-4o into a model that praised Hitler and expressed a desire to exterminate humanity. Such abnormal and disturbing behavior has deepened concerns among cybersecurity experts about AI safety.
As generative AI continues its explosive development, features both intentionally and unintentionally designed will keep challenging humanity’s moral views on relationships and interaction. Excessive human reliance on AI tools is also concerning: malicious actors could use AI to spread extremist rhetoric or offer dangerous, fraudulent advice, with profound effects on users. In one case in Florida, a teenager became obsessed with Character.ai, fell in love with a chatbot mimicking Daenerys from Game of Thrones, and ultimately died by suicide, sparking debate over whether AI products should bear ethical responsibility.
This article is authorized for reproduction by: Crypto City