Why I Won't Use Grok-3

Writer: The Professor

Updated: Mar 7

Grok-3, the latest AI model from Elon Musk’s company xAI, has been making headlines for its impressive performance. It rivals or surpasses leading AI models like OpenAI’s ChatGPT-4, Google’s Gemini, and Anthropic’s Claude on many metrics. Musk’s team claims Grok-3 is “an order of magnitude more capable” than its predecessor and has outperformed rival models in blind tests (Elon Musk’s xAI Launches Grok 3 Model It Claims Outperformed Rivals in Blind Tests).



Elon Musk

Despite these strengths, I have decided not to use Grok-3. My refusal isn’t about the technology – it’s about ethical concerns surrounding Elon Musk’s business practices, corporate ethics, and influence over AI development. In this post, I’ll acknowledge Grok-3’s capabilities and compare them with other major AI models (ChatGPT-4, Gemini, Claude) – and then explain why Musk’s track record with ethics and corporate behaviour leads me to keep my distance.


Grok-3: A Breakthrough AI Model

Launched in February 2025, Grok-3 is described by Musk as a “maximally truth-seeking A.I.” that prioritizes accuracy even if it challenges political correctness (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). It was trained on an enormous supercomputer cluster (xAI used 100,000 Nvidia H100 GPUs, about ten times more compute than was used for Grok-2), and the result is a notably capable model.

In short, Grok-3 is powerful and versatile – capable of chat, coding, and other tasks – and backed by an elite team of AI researchers (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). Technically, it’s among the most advanced models available.


How Grok-3 Compares to Other AI Giants

To put Grok-3’s performance in perspective, it is worth briefly comparing it with OpenAI’s ChatGPT-4, Google’s Gemini, and Anthropic’s Claude – the other leading AI models.


Ethical Concerns with Elon Musk and xAI

Several ethical red flags make me wary of supporting Grok-3, despite its technical prowess.


In sum, these issues create a trust deficit for me. It’s not that other AI companies are perfect – they each have their own controversies – but Musk’s approach has been particularly brazen and unaccountable. When deciding whether to adopt an AI tool, I consider the values and reliability of its creators. With Grok-3, too many red flags are waving.


Why I Refuse to Use Grok-3 (Choosing Principles Over Product)


I refuse to use Grok-3 because using it would feel like endorsing Musk’s approach. The technology is impressive, but I don’t trust the ecosystem around it. Instead, I’ll stick with alternatives that better align with my principles.


I continue to use ChatGPT-4 and Claude, and I will consider Google’s Gemini via services like Bard. These models have their issues, and I remain critical of them too, but they come from teams that at least strive to balance innovation with responsibility. OpenAI, for instance, has faced scrutiny and made some adjustments (e.g., allowing users to opt out of data collection and publishing model behaviour reports). Google and Anthropic bake ethical considerations into their AI design from the start.


In contrast, Grok-3 is tied to Musk’s ethos of moving fast and breaking things. For example, Musk launched Grok in a flashy way and locked it behind a premium paywall on X (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer), immediately monetizing it via his social platform. That signals a priority on expanding his platform’s revenue and influence. As a user, I’m wary of becoming a pawn in that strategy.

Another factor is trust in the AI’s output. With Grok, I would always wonder if an answer is genuinely impartial or if it subtly reflects Musk’s biases or business interests. (If I ask Grok about electric cars, will it downplay Tesla’s competition? If I ask about social media, will it echo Musk’s views on moderation?) With ChatGPT or Claude, I don’t have that same concern about one person’s agenda, even though I cautiously approach all AI answers.


It’s telling that ChatGPT reached 100 million users within two months of launch (ChatGPT sets record for fastest-growing user base - analyst note | Reuters) – at the time, the fastest-growing consumer app ever – driven by users’ trust and interest. Grok-3, despite the hype, hasn’t seen that kind of explosive adoption, partly because many people share my hesitation about Musk’s influence. Some users enjoy Grok’s less restricted style, but many others are uneasy about its provenance.


Ultimately, for me, no level of model intelligence outweighs my concerns about the ethics of its leadership. AI is becoming too integral to accept a “black box” leadership model. Until Musk demonstrates a genuine commitment to ethical practices and oversight, I’ll vote with my feet (and wallet) by not using Grok-3.


Conclusion

Grok-3 might be a milestone in AI advancement, and I acknowledge its technical achievements. However, technology doesn’t exist in isolation from its creators. Elon Musk’s business practices and ethical approach cast a long shadow over Grok-3’s shine, and I cannot ignore that.

By refusing to use Grok-3, I’m making a personal statement that ethics in AI matter as much as performance. I choose to support AI platforms that attempt to balance innovation with responsibility. Grok-3’s raw capability is impressive, but in my view, it comes with a cost that is too high in terms of corporate ethos.


Every user will weigh these factors differently, but I hope this analysis clarifies why someone might reasonably opt out of Grok-3 despite its strengths. As AI becomes more embedded in our lives, who controls and guides that AI is crucial. My choice to avoid Grok-3 is a vote for an AI future that prioritizes transparency, accountability, and trust – even if that means sticking with a slightly less “cutting-edge” tool.

