What is an API?
OpenAI released an API for ChatGPT and there's been a lot of chatter about it. So what *is* an API, when/why is it helpful, and why am I (and countless others) concerned about the ChatGPT API?
First things first. API stands for “Application Programming Interface,” but that’s not super descriptive, so here goes:
An API is a digital thing (i.e., code) that allows two or more digital things to connect with each other and share data. You can think of it like an electrical socket—it connects a thing in your house to the power grid. By itself it does nothing. There’s an electrical grid/system, and there’s a toaster. How does electricity get from the grid to the toaster? You plug a cord into the socket, and boom—electricity moves to the toaster. An API serves a similar function.
So when you go to a website and see a Twitter feed, or use Apple Pay as you check out on a website, an API made that possible. And that’s why they are exciting. APIs are often turning points for innovation, because they allow creative brains to access useful technology and data and build them into new products and services, often in unpredictable and creative ways.
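To make the socket metaphor concrete, here is a toy sketch in Python of two entirely separate programs sharing data through an API. The "weather service" and its endpoint are invented for illustration; nothing here is any real company's API. One program exposes a piece of data at an address; the other plugs in and reads it.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Program 1, the "power grid": exposes some data through an API endpoint.
class WeatherAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"city": "Oslo", "temp_c": 4}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), WeatherAPI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Program 2, the "toaster": a separate program plugging into that socket.
with urlopen(f"http://127.0.0.1:{server.server_port}/weather") as resp:
    data = json.load(resp)

print(data["temp_c"])  # the second program now has the first one's data
server.shutdown()
```

The second program never sees the first one's code or database. It only sees the socket, which is the whole point: anyone who can reach the socket can build on what's behind it.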
So last week OpenAI released an API for ChatGPT. This means that literally anyone who wants to build something that connects with / uses / builds on ChatGPT, can. Instantly. Everywhere.
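"Literally anyone can, instantly" is not an exaggeration. Below is a rough sketch, in Python, of what integrating ChatGPT amounts to: one HTTP request to OpenAI's chat completions endpoint, as documented at the API's launch. The prompt is invented, `OPENAI_API_KEY` is a placeholder, and the request is built but deliberately not sent (sending it requires a real account key).

```python
import json
from urllib import request

# The body of a chat completions call: which model, and what to say to it.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a product review for me."}],
}

req = request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer OPENAI_API_KEY",  # placeholder, not a real key
    },
    method="POST",
)
# request.urlopen(req) would return the model's reply as JSON. That is the
# entire integration: any product, anywhere, can do this.
```

A dozen lines like these are all that stands between ChatGPT and any app, bot, or website that wants to embed it, which is exactly why the release is both exciting and concerning.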
Why the concern about ChatGPT’s API?
Every technology has upside potential and downside risk. If all you are hearing is upside, red flags should go up. There is always a downside, and ignorance—whether intentional or not—means you are making decisions with blinders on. So what’s the red flag here?
In short:
- ChatGPT is a brand new technology with countless known and documented problems and risks.
- Its potential impact is far-reaching, with privacy, copyright/IP, education, and mis/disinformation among the most immediate concerns.
- Releasing an API for an emerging technology of this magnitude takes the product in its current form and spreads it out all over the world with massive and immediate implications.
So the API goes out and *any* product, *anywhere*, can integrate ChatGPT instantly. When another product integrates ChatGPT, how visible will that integration be to users? Will users know what they are using is generative AI? How clear will the constraints, limitations, and inaccuracies of the product be to users? And critically… how will OpenAI enforce its usage policies?
Consider how lies, abuse, and fraud surface online already. They surface in every new product, sometimes shaped slightly differently, but the behavior is not different—what’s different is the scale, speed, and reach. Even without the API, ChatGPT was positioned to turbo-charge lies, abuse, fraud, and more. The API attaches a rocket to it, just as we are learning how people are using it.
“But you can’t wait until your system is perfect to release it.”
This is true. No one knows how people will use a product until it is out in the world. That said, a basic best practice of product development is to slowly increase access/use so the product team can effectively respond to how it’s being used. A metaphor I often use for this is plumbing in a house. You want to test the pipes before you close the walls, so you turn on the water SLOWLY and look for leaks. Instead of turning the water on slowly, OpenAI has basically connected the pipes to a firehose.
I am not arguing that the API should never exist. What I am saying is that (1) releasing it so soon does not seem to align with OpenAI’s stated commitment to “responsible” development, and (2) if you are in a position of authority, be aware of the firehose.