OpenAI has doubled the hourly rate limits for GPT-4o and GPT-4-mini-high models for ChatGPT Plus users, in a bid to improve access and usability for power users. CEO Sam Altman announced the move on X, saying it follows user feedback requesting more flexibility.
The update comes even as OpenAI continues to face infrastructure constraints, notably GPU shortages that force “hard trade-offs” between rate limits, new feature launches, and latency.
Altman also confirmed that from April 30, the legacy GPT-4 model will be removed from ChatGPT as the platform transitions fully to GPT-4o, now the default model for all users.
To keep up with growing demand, OpenAI is adding “tens of thousands” of GPUs to its stack.
“we really do try to listen to feedback! we would love to be able to do even more; we continue to have to make very hard tradeoffs between rate limits, new feature launches, and latency. the GPUs are coming, so hopefully it gets better.”
— Sam Altman (@sama) April 23, 2025