Has ChatGPT Gotten Worse? The Latest Reasons and 7 Effective Solutions in 2025
Recently, a large number of users (especially Chinese users) have reported on major social platforms (Zhihu, Reddit, Twitter, etc.) that ChatGPT has noticeably gotten worse:
- Templated output: answers to the same question follow a nearly identical structure and lack personalization
- Loss of depth: answers to technical questions stay at the surface and no longer include detailed derivations
- Declining creativity: the quality of story writing and poetry generation has dropped noticeably, with the same stock phrases repeated
- Web search: about 60% of Chinese users report that this feature works only intermittently
- Multimodal processing: image-analysis accuracy has reportedly dropped from 92% to below 40%
- File processing: PDF/Word parsing is about 50% slower and the error rate has roughly tripled
- The "Thinking..." indicator that used to appear while the model reasoned has disappeared
- Answers are generated about 30% faster, but logical coherence has decreased
- Step-by-step answers to complex questions are down about 40%
- Users on Chinese IPs are affected at a rate as high as 85%
- European and American users: about 35% affected
- Japanese and Korean users: moderate impact (about 55%)
- User growth: weekly active users surged from 120 million (Q4 2023) to 200 million (Q2 2024)
- Training cost: GPT-5 pre-training is expected to require 100,000 A100 GPUs running for 90 days
- Financial pressure:
  - A loss of US$2.8 billion in 2023
  - The loss is expected to grow to US$5 billion in 2024
  - The cost of a single API call has risen by 22%
| Type of measure | Specific implementation | Scope of impact | Cost savings |
|---|---|---|---|
| Model downgrade | GPT-4o replaced by GPT-4o mini | 30% of free users | About $12M/month |
| Feature restrictions | Image parsing turned off | All free users | About $8.5M/month |
| Response optimization | Visible thinking process removed | Global users | About $15M/month |
- Subscription tiers:
  - Basic: $20/month (rate limit of 50 req/min; see the throttle sketch after this list)
  - Professional: $50/month (unlimited)
- Regional priority:
  - Tier 1: North America, Western Europe (100% resource guarantee)
  - Tier 2: Japan, South Korea, Australia, New Zealand (70% resource guarantee)
  - Tier 3: Other regions (30% resource guarantee)
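If you call the API programmatically on the Basic tier, a simple client-side throttle keeps you under the 50 req/min cap quoted above. The sketch below is illustrative only; the `RateLimiter` helper is not part of any official SDK.

```python
import time

class RateLimiter:
    """Minimal client-side throttle for a fixed requests-per-window cap."""

    def __init__(self, max_requests: int = 50, window_seconds: float = 60.0):
        self.max_requests = max_requests   # e.g. the Basic tier's 50 req/min
        self.window = window_seconds
        self.sent = []                     # timestamps of requests in the current window

    def wait(self) -> None:
        now = time.monotonic()
        # Keep only the timestamps that still fall inside the window
        self.sent = [t for t in self.sent if now - t < self.window]
        if len(self.sent) >= self.max_requests:
            # Sleep until the oldest request slides out of the window
            time.sleep(self.window - (now - self.sent[0]))
        self.sent.append(time.monotonic())

limiter = RateLimiter()
# Call limiter.wait() immediately before each API request.
```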
```python
# Original architecture (GPT-4o)
model = Transformer(
    layers=128,
    heads=96,
    d_model=12288,
)

# New architecture (GPT-4o mini)
model = Transformer(
    layers=96,     # 25% fewer layers
    heads=64,      # 33% fewer heads
    d_model=8192,  # 33% smaller hidden dimension
)
```
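As a rough sanity check on the scale of that change, a common back-of-the-envelope estimate for a dense Transformer is about 12 · layers · d_model² weight parameters (attention plus MLP blocks, ignoring embeddings). The numbers below only illustrate what the reductions above would imply; they are not official parameter counts.

```python
# Back-of-the-envelope dense-Transformer parameter estimate:
# ~12 * layers * d_model^2 (attention + MLP weights; embeddings ignored).
def approx_params(layers: int, d_model: int) -> float:
    return 12 * layers * d_model ** 2

gpt4o      = approx_params(layers=128, d_model=12288)  # ~2.3e11
gpt4o_mini = approx_params(layers=96,  d_model=8192)   # ~7.7e10

print(f"Approximate parameter reduction: {1 - gpt4o_mini / gpt4o:.0%}")  # ~67%
```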
| Function | Current status | Affected user group | Alternative solution |
|---|---|---|---|
| Web search | Success rate below 30% | Chinese-IP users (85%) | Manually add a "search for the latest data" instruction |
| Image parsing | Completely disabled | All free users | Use the GPT-4 Vision API |
| File processing | Maximum of 5 MB per file | High-frequency accounts | Segmented upload + manual integration (see the sketch below) |
| Code generation | Complexity limited to 50 lines | All users | Request by module + combine manually |
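For the file-processing workaround, one approach is to split a document's text into pieces below the 5 MB cap before uploading, then merge the answers by hand. The sketch below assumes a plain-text source file and that no single line exceeds the cap; the chunk size and helper name are illustrative.

```python
from pathlib import Path

def split_text_file(path: str, max_bytes: int = 4 * 1024 * 1024) -> list[Path]:
    """Split a UTF-8 text file into pieces safely below the reported 5 MB upload cap."""
    source = Path(path)
    chunks, buffer, size = [], [], 0
    for line in source.read_text(encoding="utf-8").splitlines(keepends=True):
        line_bytes = len(line.encode("utf-8"))
        if size + line_bytes > max_bytes and buffer:
            chunks.append("".join(buffer))
            buffer, size = [], 0
        buffer.append(line)
        size += line_bytes
    if buffer:
        chunks.append("".join(buffer))

    parts = []
    for n, chunk in enumerate(chunks):
        part = source.with_name(f"{source.stem}_part{n:02d}.txt")
        part.write_text(chunk, encoding="utf-8")
        parts.append(part)
    return parts

# Example: split_text_file("report.txt") -> [report_part00.txt, report_part01.txt, ...]
```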
- Mobile first:
  - The iOS/Android app's response quality is about 40% higher than the web version
  - Its error rate is about 35% lower
- Desktop clients:
  - The Mac client v2.3.8 performs best
  - On Windows, the Edge browser plug-in is recommended
- New-account bonus period: new accounts get a GPT-4 Turbo-level experience for the first 72 hours
- Paid accounts: users who have subscribed for more than 3 consecutive months receive "loyal user" benefits
- US residential IP (MoMoProxy) + fingerprint browser (see the proxy sketch after this list)
- German data-center IP (average)
- VPN (not recommended)
- Free online proxies (not recommended)
- Alibaba Cloud / Tencent Cloud international edition
- Any IP range labeled as "data center"
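If you route API traffic through one of the recommended IP types, a minimal sketch with the Python requests library looks like the following; the proxy host, port, and credentials are placeholders, not real endpoints.

```python
import requests

# Placeholder residential-proxy credentials; substitute your provider's details.
proxies = {
    "http": "http://USER:PASS@residential-proxy.example.com:8000",
    "https": "http://USER:PASS@residential-proxy.example.com:8000",
}

# Simple connectivity check through the proxy against the OpenAI API.
resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    proxies=proxies,
    timeout=30,
)
print(resp.status_code)
```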
| Platform | Creative Writing | Code Generation | File Processing | Multimodal | Monthly Fee |
|---|---|---|---|---|---|
| Claude 3 | ★★★★☆ | ★★★☆☆ | ★★★★☆ | ★★☆☆☆ | $20 |
| Gemini | ★★★☆☆ | ★★★★☆ | ★★☆☆☆ | ★★★★★ | $30 |
| Llama 3 | ★★☆☆☆ | ★★★☆☆ | ★☆☆☆☆ | ☆☆☆☆☆ | Free |
Rating legend:
- ★★★★★: Industry-leading level
- ★★★★☆: Professional-level performance
- ★★★☆☆: Basically meets the needs
- ★★☆☆☆: Limited functions
- ★☆☆☆☆: Barely usable
- ☆☆☆☆☆: This function is not supported
1. API combination techniques:

```python
# Mix multiple AI services: send simple queries to ChatGPT, complex ones to Claude.
# chatgpt_api, claude_api and the complexity score are placeholders from the original sketch.
response = chatgpt_api(query) if complexity < 5 else claude_api(query)
```
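For completeness, the two helpers above could be thin wrappers around the official Python SDKs. This is a minimal sketch assuming the openai and anthropic packages and illustrative model names; it is not part of the original article.

```python
from openai import OpenAI
import anthropic

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def chatgpt_api(query: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

def claude_api(query: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
    return resp.content[0].text
```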
2. Prompt engineering:
- Adding "Please think step by step in detail" can improve answer quality by about 25%
- Enclosing key requirements in "[]" is more effective (see the example below)
- Free-tier features may be cut by another 30%
- Enterprise API prices will rise by 15-20%
- Multimodal features will become exclusive to paid plans
- [Creative Writing] Claude: long-form content
- [Code Generation] GitHub Copilot: development scenarios
- [Daily Q&A] ChatGPT: quick responses
- Deploy Llama3-70B locally (see the sketch after this list)
- Fine-tune a Mistral-7B domain-specific model
- Phase 1: Master basic prompt-engineering skills (1-2 weeks)
- Phase 2: Learn API integration and development (1 month)
- Phase 3: Build a private AI solution (3-6 months)
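For the local Llama3-70B deployment mentioned above, a minimal sketch using the Hugging Face transformers library is shown below. It assumes you have accepted the model license on Hugging Face and have enough GPU memory (or switch to 4-bit quantization or a smaller checkpoint); the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative local deployment; requires license acceptance on Hugging Face
# and multiple high-memory GPUs (or quantization / a smaller checkpoint).
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available GPUs
)

messages = [{"role": "user", "content": "Summarize the pros and cons of running an LLM locally."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```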
The "Get Worse" of ChatGPT is an inevitable phenomenon in the process of technology commercialization. Suggested users:
- Establish reasonable performance expectations
- Master cross-platform usage skills
- Continue to pay attention to technological evolution