ChatGPT from OpenAI has been getting a ton of coverage recently. I’ve been playing around with the ChatGPT AI engine since sometime this summer to see the possibilities, and to consider how I’ll incorporate existing AI engines into my next thing.
It’s super cool at first: you pose a detailed question or assign it a specific task, like “show me the code for adding an appointment to Google Calendar using the Google Calendar API, Node.js, and Express.js,” and it will write what I’d call a sketch of that task in functioning code.
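For context, here’s a sketch of the kind of code that prompt tends to produce. This is my illustration, not ChatGPT’s actual output; it assumes the `googleapis` Node.js client, and the API call itself is shown in comments since it needs real OAuth2 credentials to run.

```javascript
// Build the event resource object that the Google Calendar API expects.
// The time zone default is just an example value.
function buildEvent(summary, startIso, endIso, timeZone = 'America/New_York') {
  return {
    summary,
    start: { dateTime: startIso, timeZone },
    end: { dateTime: endIso, timeZone },
  };
}

// With the googleapis client and an authorized OAuth2 client, inserting
// the event looks roughly like this (hypothetical wiring, not run here):
//
//   const { google } = require('googleapis');
//   const calendar = google.calendar({ version: 'v3', auth: oauth2Client });
//   const res = await calendar.events.insert({
//     calendarId: 'primary',
//     requestBody: buildEvent(
//       'Demo meeting',
//       '2024-01-15T10:00:00-05:00',
//       '2024-01-15T10:30:00-05:00'
//     ),
//   });
```

The interesting part isn’t the snippet itself; it’s that the engine assembles this kind of scaffolding, with the right resource shape and client calls, from a one-sentence prompt.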
It’s been trained on those APIs and a ton of sample code and open source repositories; at this point it seems to pretty seamlessly pull together and synthesize code from those repos and give me a very nice head start on actual problems I’m trying to solve.
The next level is to ask it to rewrite the code so it has fewer lines, or runs faster, or avoids certain methods; it will indeed refine the code. It’s pretty amazing 🙂
When asking broader questions it can be less amazing; if you ask something about health, it will couch the response because, I assume, the OpenAI legal team is involved in mitigating the risk of, well, everything. On a few occasions I’ve asked it a question, received a broad, thin answer, and, dissatisfied, asked it to go deeper, and deeper, and then asked for citations to make sure it was reflecting real material. The results have been pretty impressive, often in forms that are more informative and insightful than Google results for the same questions or terms.
I don’t have a specific point today; maybe it’s just this: it’s amazing, it works, and we need to pay very close attention and figure out where and how we want to include it in (and exclude it from) our lives. These are fantastic tools with deep implications that can help and hurt us as people and communities; we absolutely need ethical guardrails informing their deployment.