Developers are waiting to test the Apple Intelligence graphics tools.
Apple says it will allow access to forthcoming image-creation tools like Image Wand, Genmoji, and Image Playground "over the coming weeks," making for an unusually long waitlist for testers.
Although Apple has already released the first beta of iOS 18.2 to developers, access to the new graphics tools will be limited. Apple said in a note to developers that "when the features are ready for you to test, you will be notified."
The waitlist approach is similar to the way the company limited access to the initial set of Apple Intelligence features in the iOS 18.1 beta. In that case, however, access became widespread more quickly.
Developers can express interest in testing certain of the new graphics features, specifically Image Playground, Genmoji, and Image Wand. Apple's note does not say how developers can request this access, nor is it clear whether they can express interest in more than one of the features.
For general users, based on the timetables previously outlined, it still appears likely that Apple will release iOS 18.2 with the new graphics features before the end of 2024.
The waitlist does suggest that public betas of iOS 18.2 are unlikely to appear before late November, with the actual release expected to arrive sometime in December. The official update to iOS 18.1, which offers some features of Apple Intelligence, is due in the final week of October.
"When the features are ready for you to test, you will be notified," Apple said to developers in its note. "After you receive access, you can tap the thumbs up or thumbs down that appear with each result in Image Playground, Genmoji, and Image Wand in order to provide feedback."
Apple appears to be taking a cautious approach to rolling out its Apple Intelligence features due to examples of "hallucinations" and other problems seen in other AI models.
"We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm," the company says, referring to its Responsible AI Principles.