OpenAI introduced a bit of discipline to ensure that its GPT models are precise in the data format of their responses. Specifically, the new Structured Outputs feature guarantees that, when asked, the model's responses conform exactly to JSON schemas provided by developers.
Generating structured data from unstructured inputs is one of the core use cases for AI in today’s applications. Developers use the OpenAI API to build powerful assistants that have the ability to fetch data and answer questions via function calling, extract structured data for data entry, and build multi-step agentic workflows that allow LLMs to take actions. Developers have long been working around the limitations of LLMs in this area via open source tooling, prompting, and retrying requests repeatedly to ensure that model outputs match the formats needed to interoperate with their systems. Structured Outputs solves this problem by constraining OpenAI models to match developer-supplied schemas and by training our models to better understand complicated schemas.
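To make that concrete, here's a minimal sketch of what this looks like in the OpenAI Python SDK, based on the announcement: you hand the API a JSON Schema and set `strict` to true, and the model's reply is constrained to that shape. The model name, schema, and prompt below are illustrative, not canonical; check the current OpenAI docs for exact parameter names before relying on this.

```python
# Sketch: Structured Outputs with a developer-supplied JSON Schema.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The schema the model's reply must conform to (strict mode requires
# every property to be listed in "required" and additionalProperties: false).
event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "date": {"type": "string"},
        "attendees": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name", "date", "attendees"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # illustrative model name
    messages=[
        {"role": "system", "content": "Extract the event details from the user's message."},
        {"role": "user", "content": "Ada and Grace are meeting for the launch review next Friday."},
    ],
    # "strict": True constrains generation to the schema rather than merely encouraging it.
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "event", "strict": True, "schema": event_schema},
    },
)

print(response.choices[0].message.content)  # JSON text matching event_schema
```

The messy English in the user message comes back as predictable JSON your code can parse without retries or regex cleanup.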
Most of us experience OpenAI’s GPT models as a chat interface, and that’s certainly the interaction of the moment. But LLMs are fluent in lots of languages: not just English or Chinese or Spanish, but JSON, SVG, Python, etc. One of their underappreciated talents is moving fluidly between different representations of ideas and concepts. Here specifically, they can translate messy English into structured JSON. That's what lets these systems interoperate with other software, one of the three core attributes that define the form of AI-mediated experiences, as I describe in The Shape of Sentient Design.
What this means for product designers: As I shared in my Sentient Design talk, moving nimbly between structured and unstructured data is what enables LLMs to help drive radically adaptive interfaces. (This part of the talk offers an example.) This is the stuff that will animate the next generation of interaction design.
Alas, as with all things LLM, the models sometimes drift a bit from the specific ask; the JSON they come back with isn't always what we asked for. This latest update is a promising step toward getting disciplined responses when we need them, so that Sentient Design experiences can reliably communicate with other systems.