You’re being pedantic and confidently ignorant. The product is called “ChatGPT”, and through it you can access multiple models, like GPT-3.5 or DALL·E.
ChatGPT is just a front-end that maintains a session and feeds it to an LLM each time you add a reply. It now has access to image generation too, so I was wrong.
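That front-end behaviour is the whole trick: the model itself is stateless, so the client resends the accumulated history on every turn. A minimal sketch, assuming the OpenAI Python SDK; the model name and prompts are only illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "session" is just a message list kept on the client side.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def add_reply(user_text: str) -> str:
    """Append the user's message, resend the full history, store the reply."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model name
        messages=messages,       # the entire conversation is sent every turn
    )
    assistant_text = response.choices[0].message.content
    messages.append({"role": "assistant", "content": assistant_text})
    return assistant_text

print(add_reply("Who invented the rapier?"))
print(add_reply("When?"))  # only works because the history above is resent
```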
I beg to differ.
The LLM is executing a function call to a diffusion image model. The LLM does not generate the image itself.
This doesn’t contradict what the OP said. ChatGPT is now an interface to both an LLM and a diffusion-based image generator.
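Roughly, that split looks like this in code. A hedged sketch, assuming an OpenAI-style tool-calling flow; the tool name generate_image, the model names, and the routing are illustrative, not how ChatGPT is actually wired internally:

```python
import json
from openai import OpenAI

client = OpenAI()

# The LLM can only *describe* the image request as a structured tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "generate_image",  # illustrative tool name
        "description": "Create an image from a text prompt.",
        "parameters": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    },
}]

chat = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Draw two fencers at sunset."}],
    tools=tools,
)

tool_call = chat.choices[0].message.tool_calls[0]  # in practice, check this is not None
prompt = json.loads(tool_call.function.arguments)["prompt"]

# The pixels come from a separate diffusion model, not from the LLM.
image = client.images.generate(model="dall-e-3", prompt=prompt)
print(image.data[0].url)
```

Either way, from the user's side it is all "ChatGPT"; the hand-off to the image model happens behind the interface.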
Girl on the right probably killed a Spanish swordsmith back in the day.
Yeah, but the model that does the images is actually DALL·E; you’re just using ChatGPT’s interface to create them.
So, I’m using ChatGPT.
Thank you for agreeing with me.
Sure, sure, I wasn’t disagreeing; technically you are using ChatGPT. I’m just pointing out that the model handling the image creation isn’t ChatGPT.
Pedantic.
Imbecile.
How so?