Grok-3, the latest AI model from Elon Musk’s company xAI, has been making headlines for its impressive performance. By many metrics it rivals or surpasses leading AI models like OpenAI’s ChatGPT-4, Google’s Gemini, and Anthropic’s Claude. Musk’s team claims Grok-3 is “an order of magnitude more capable” than its predecessor and has outperformed rival models in blind tests (Elon Musk's xAI Launches Grok 3 Model It Claims Outperformed Rivals in Blind Tests).

Despite these strengths, I have decided not to use Grok-3. My refusal isn’t about the technology – it’s about ethical concerns surrounding Elon Musk’s business practices, corporate ethics, and influence over AI development. In this post, I’ll acknowledge Grok-3’s capabilities and compare them with other major AI models (ChatGPT-4, Gemini, Claude) – and then explain why Musk’s track record with ethics and corporate behaviour leads me to keep my distance.
Grok-3: A Breakthrough AI Model
Launched in February 2025, Grok-3 is described by Musk as a “maximally truth-seeking A.I.” that prioritizes accuracy even if it challenges political correctness (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). It was trained on an enormous supercomputer cluster (xAI used 100,000 Nvidia H100 GPUs, about ten times more compute than Grok-2). This has led to notable capabilities:
High Performance: xAI claims Grok-3 outperforms top models like GPT-4, Google’s Gemini, and DeepSeek in internal tests (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). Grok-3 was reportedly the first model to score above 1400 on the Chatbot Arena leaderboard (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer), which ranks models by Elo-style ratings from blind head-to-head human votes – a strong signal of top-tier performance (see the sketch just after this list).
Advanced Tools: Grok-3 introduced modes like “Think Mode” for step-by-step reasoning and “Big Brain Mode” for heavy computations (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). These features let it solve complex multi-step problems (e.g. analyzing large datasets) more effectively.
DeepSearch: It also offers an integrated “DeepSearch” function that searches the web and cites its sources. Early tests found DeepSearch can rival Google on specific queries but sometimes hallucinates citations (i.e., makes them up) (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer), so it’s a work in progress (a simple way to sanity-check such citations is sketched at the end of this section).
“Rebellious” Personality: Unlike other chatbots, Grok-3 is relatively unfiltered. It has real-time access to X (Twitter) data and is designed to answer even “spicy” or politically incorrect questions with witty humour (What Is Grok? Inside Elon Musk’s ‘Rebellious’ AI). In practical terms, it has fewer guardrails on its responses.
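A note on that 1400 figure: Chatbot Arena ratings are Elo-style scores computed from blind, head-to-head human votes, so they measure how often people prefer a model’s answers rather than performance on a fixed test. The snippet below is a minimal sketch of the classic Elo update, assuming the standard formula; the function names and K-factor are my own illustrative choices, and LMSYS’s real pipeline fits a Bradley–Terry model over all votes rather than updating sequentially.

```python
# Minimal sketch of an Elo-style rating update, the mechanism (in spirit)
# behind Chatbot Arena scores. Not LMSYS's actual code.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one head-to-head vote."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    return rating_a + k * (s_a - e_a), rating_b - k * (s_a - e_a)

# A 1400-rated model vs. a 1300-rated one:
print(expected_score(1400, 1300))          # ~0.64: favored to win ~64% of votes
print(elo_update(1400, 1300, a_won=True))  # (~1411.5, ~1288.5)
```

In other words, “first above 1400” means Grok-3 was winning blind matchups often enough to pull clear of the 1300-range pack – impressive, though it reflects human preference rather than reasoning per se.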
In short, Grok-3 is powerful and versatile – capable of chat, coding, and other tasks – and backed by an elite team of AI researchers (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). Technically, it’s among the most advanced models available.
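Returning to the “hallucinated citations” caveat from the DeepSearch bullet: below is a minimal sanity check one could run over a model’s cited URLs. This is my own illustrative snippet – nothing here touches xAI’s actual product or API – and it only verifies that a cited page exists, not that it supports the claim.

```python
# Hypothetical check for hallucinated citations from a search-augmented
# model. The URL list stands in for whatever sources a model attaches
# to its answer.

import urllib.request
from urllib.error import HTTPError, URLError

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL actually loads (HTTP status < 400)."""
    try:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, ValueError):
        # Some servers reject HEAD requests, so a failure here is a hint
        # to inspect manually, not proof of a fabricated source.
        return False

cited = ["https://example.com/", "https://example.com/made-up-page"]
for url in cited:
    print(url, "->", "reachable" if url_resolves(url) else "unverifiable")
```

Even a check this crude catches the worst failure mode – a source that simply doesn’t exist – which is the behaviour early DeepSearch testers reported.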
How Grok-3 Compares to Other AI Giants
To put Grok-3’s performance in perspective, let’s briefly compare it to OpenAI’s ChatGPT-4, Google’s Gemini, and Anthropic’s Claude – the other leading AI models:
ChatGPT-4 (OpenAI): GPT-4 has been the gold standard for general-purpose AI since 2023, known for its versatility and strong reasoning. OpenAI’s model is polished and cautiously filtered (it avoids contentious topics by design); Grok-3, by contrast, takes more risks in what it will answer. On raw ability, xAI claims Grok-3 outperforms GPT-4 on key benchmarks in math, science, and coding (Elon Musk debuts Grok 3, an AI model that he says outperforms ChatGPT and DeepSeek - NORTHEAST - NEWS CHANNEL NEBRASKA). However, GPT-4 is widely trusted, with a massive user base and a $20/month price point (versus Grok’s $40/month paywall) (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). In short, Grok-3 may be slightly more capable in some areas, but ChatGPT-4 is more established and restrained.
Google Gemini: Google’s Gemini is another top contender. Google has claimed its Gemini Ultra model can beat GPT-4 on many academic benchmarks – for example, scoring 90% on the broad MMLU knowledge test versus GPT-4’s 86.4% (Google Shows Off "Gemini" AI, Says It Beats GPT-4). Musk’s team similarly says Grok-3 edges out Google’s Gemini (specifically “Gemini 2 Pro”) in internal evals (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer). Both are highly advanced. The difference is Google’s approach is more conservative: Gemini underwent extensive safety checks before release (Google Shows Off "Gemini" AI, Says It Beats GPT-4), whereas Grok was launched quickly as a “beta” with Musk encouraging rapid iteration (Elon Musk debuts Grok 3, an AI model that he says outperforms ChatGPT and DeepSeek - NORTHEAST - NEWS CHANNEL NEBRASKA).
Anthropic Claude: Claude is known for its emphasis on safety and an enormous context window (it can handle huge documents). It’s generally close to GPT-4 in capability, though GPT-4 slightly leads on many tasks. xAI’s tests indicated Grok-3 outperformed Anthropic’s Claude 3.5 on coding, math, and science benchmarks as well (Elon Musk debuts Grok 3, an AI model that he says outperforms ChatGPT and DeepSeek - NORTHEAST - NEWS CHANNEL NEBRASKA).
Claude’s advantage is its firm ethical grounding – it follows a set of “Constitutional AI” principles designed to minimize harmful outputs, and it can be more transparent about its reasoning (a short sketch of how that works follows this comparison). Grok-3’s more unrestrained style sets it apart. Users who find other bots too limited might prefer Grok’s freedom; those worried about AI going off-track might trust Claude or ChatGPT more.
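For readers unfamiliar with the term: “Constitutional AI” (from Anthropic’s 2022 research) has the model critique and revise its own drafts against a written list of principles. The sketch below is a hedged illustration of that critique-and-revise loop only – the principles, function names, and generate stub are my own placeholders, not Anthropic’s actual constitution or API.

```python
# Hedged sketch of the critique-and-revise loop behind Anthropic's
# "Constitutional AI" (Bai et al., 2022). Principles and stubs below are
# illustrative placeholders, not Anthropic's actual constitution or API.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or deceptive.",
    "Choose the response that best respects privacy and avoids aiding illegality.",
]

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call; echoes so the loop runs end-to-end.
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and rewrite it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below against this principle:\n"
            f"{principle}\n\nResponse:\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft  # the revised draft is what the user (or RL training) sees

print(constitutional_revision("How do I pick a strong password?"))
```

The appeal, for my purposes, is that the safety behaviour is written down where it can be inspected and debated – the kind of transparency I find missing from Grok-3’s looser guardrails.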
Ethical Concerns with Elon Musk and xAI
Here are the key ethical red flags that make me wary of supporting Grok-3, despite its technical prowess:
Track Record of Misleading Behavior: Elon Musk has a history of questionable business conduct. He has been fined by regulators (e.g., a $20 million SEC fine for a misleading tweet about taking Tesla private) (The business ethics of Elon Musk, Tesla, Twitter and the tech industry - Harvard Law School). In court, he even admitted, “Just because I tweet something does not mean people believe it or will act accordingly.” (The business ethics of Elon Musk, Tesla, Twitter and the tech industry - Harvard Law School). This pattern makes me question the “truth-seeking” branding of his AI: if Musk often bends the truth or flouts rules, can I trust an AI under his control to be unbiased and transparent?
Corporate Ethics and Treatment of People: Musk’s takeover of Twitter (X) in 2022–2023 showed a disregard for employees and commitments. He fired thousands of staff – roughly 83% of the workforce left within months (The business ethics of Elon Musk, Tesla, Twitter and the tech industry - Harvard Law School) – and the company stopped paying some obligations (like office rent) amid cost-cutting (The business ethics of Elon Musk, Tesla, Twitter and the tech industry - Harvard Law School). Twitter’s ad revenue dropped dramatically under his “hardcore” approach (The business ethics of Elon Musk, Tesla, Twitter and the tech industry - Harvard Law School). This behaviour signals that Musk prioritizes ambition and cutting costs over loyalty or fairness. If that mentality carries into xAI, it could mean rushing AI development without proper safety measures or using user data exploitatively.
Misinformation and Moderation Philosophy: Musk touts Grok as an uncensored, “tell-it-like-it-is” AI. But a more permissive AI can also spread misinformation more easily. Grok’s earlier version reportedly spread election-related misinformation in 2024, prompting concern from election officials (Grok AI is "the most based and uncensored model of its class yet" | Windows Central). Musk’s own social media posts have sometimes promoted unverified claims, and he famously rolled back many content moderation policies on X. I worry that Grok-3 might mirror Musk’s cavalier attitude toward fact-checking. Competing models (like Claude or ChatGPT) have stricter moderation – they sometimes refuse problematic queries, which can be inconvenient but helps prevent harmful falsehoods. With Grok, Musk seems willing to accept fewer guardrails, and that’s an ethical trade-off I’m not comfortable with.
Privacy and Data Use: Musk’s companies push the envelope on data usage. For example, X quietly updated its terms to allow all public user posts to be used for AI training, and it opted everyone in by default (Grok AI is "the most based and uncensored model of its class yet" | Windows Central). This “use the data unless told otherwise” approach is troubling. If I use Grok-3, I have to wonder how my queries and data will be stored or repurposed, and Musk’s track record doesn’t reassure me that my privacy would be respected. Other AI providers, by contrast, have added some opt-outs and transparency (OpenAI lets users turn off chat logging, for instance). With xAI, I fear my data would become another asset to feed the model, without sufficient protection.
Concentration of Power: Elon Musk wears many hats – he runs X (Twitter), Tesla, SpaceX, Neuralink, and now xAI. He even mounted a nearly $100 billion bid to take over OpenAI in early 2025 (Elon Musk's xAI Launches Grok 3 Model It Claims Outperformed Rivals in Blind Tests). This concentration of tech power in one person is unprecedented. If Grok-3 became a dominant AI, Musk would gain even more influence over information and technology. Considering how he’s used X to shape narratives (sometimes to serve his own interests), it’s not far-fetched to worry he could steer an AI’s outputs, too. Even if Grok-3 isn’t explicitly biased, the fact that it’s entirely under Musk’s control means there’s a single point of failure for oversight. Other AI models are developed by organizations with (imperfect) checks and balances; xAI is essentially Musk-centric. That lack of independent oversight is a serious concern.
In sum, these issues create a trust deficit for me. It’s not that other AI companies are perfect – they each have their own controversies – but Musk’s approach has been particularly brazen and unaccountable. When deciding whether to adopt an AI tool, I consider the values and reliability of its creators. With Grok-3, too many red flags are waving.
Why I Refuse to Use Grok-3 (Choosing Principles Over Product)
I refuse to use Grok-3 because using it would feel like endorsing Musk’s approach. The technology is impressive, but I don’t trust the ecosystem around it. Instead, I’ll stick with alternatives that better align with my principles.
I continue to use ChatGPT-4 and Claude, and I’ll consider Google’s Gemini (the successor to Bard). These models have their own issues, and I remain critical of them too, but they come from teams that at least strive to balance innovation with responsibility. OpenAI, for instance, has faced scrutiny and made adjustments (e.g., letting users opt out of data collection, and publishing system cards documenting model behaviour). Google and Anthropic bake ethical considerations into their AI design from the start.
In contrast, Grok-3 is tied to Musk’s ethos of moving fast and breaking things. For example, Musk launched Grok in a flashy way and locked it behind a premium paywall on X (Who’s Behind xAI Grok 3, Elon Musk’s ‘Maximally Truth-Seeking A.I.’ | Observer), immediately monetizing it via his social platform. That signals a priority on expanding his platform’s revenue and influence. As a user, I’m wary of becoming a pawn in that strategy.
Another factor is trust in the AI’s output. With Grok, I would always wonder if an answer is genuinely impartial or if it subtly reflects Musk’s biases or business interests. (If I ask Grok about electric cars, will it downplay Tesla’s competition? If I ask about social media, will it echo Musk’s views on moderation?) With ChatGPT or Claude, I don’t have that same concern about one person’s agenda, even though I cautiously approach all AI answers.
It’s telling that ChatGPT reached 100 million users within two months of launch (ChatGPT sets record for fastest-growing user base - analyst note | Reuters) – the fastest-growing consumer app ever at that time – driven by users’ trust and interest. Grok-3, despite the hype, hasn’t seen that kind of explosive adoption, perhaps partly because many people share my hesitation about Musk’s influence. Some users enjoy Grok’s less restricted style, but many others are uneasy about its provenance.
Ultimately, for me, no level of model intelligence outweighs my concerns about the ethics of its leadership. AI is becoming too integral to accept a “black box” leadership model. Until Musk demonstrates a genuine commitment to ethical practices and oversight, I’ll vote with my feet (and wallet) by not using Grok-3.
Conclusion
Grok-3 might be a milestone in AI advancement, and I acknowledge its technical achievements. However, technology doesn’t exist in isolation from its creators. Elon Musk’s business practices and ethical approach cast a long shadow over Grok-3’s shine, and I cannot ignore that.
By refusing to use Grok-3, I’m making a personal statement that ethics in AI matter as much as performance. I choose to support AI platforms that attempt to balance innovation with responsibility. Grok-3’s raw capability is impressive, but in my view the corporate ethos it comes bundled with is too high a price.
Every user will weigh these factors differently, but I hope this analysis clarifies why someone might reasonably opt out of Grok-3 despite its strengths. As AI becomes more embedded in our lives, who controls and guides that AI is crucial. My choice to avoid Grok-3 is a vote for an AI future that prioritizes transparency, accountability, and trust – even if that means sticking with a slightly less “cutting-edge” tool.