Marcelo Lewin
OpenAI Releases GPT-4 + GPT-4 Demo Live Stream Recap

OpenAI Releases GPT-4
The day has finally come. GPT-4 has been released, and it promises to be a huge step forward in multimodal LLMs. To learn more about it, check out OpenAI's links below.
GPT-4 Developer Demo Live Stream Recap
OpenAI held a live demo stream to showcase GPT-4. The most impressive thing for me was the hand-drawn web page design that was converted into code. See below for more details about the presentation.
The presenter started with a task GPT-4 can do that GPT-3 couldn't: summarizing an article into a single sentence in which every word begins with the letter G. He then repeated it with A and, finally, with Q. GPT-3 failed each time, while GPT-4 pulled it off.
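The demo ran this in OpenAI's Playground rather than as code, but if you want to try the same constraint yourself, here is a rough sketch using the OpenAI Python library (the article text and prompt wording are placeholders of my own):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: paste the article you want summarized here.
article = "..."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the following article in one sentence where "
                "every word begins with the letter G:\n\n" + article
            ),
        }
    ],
)
print(response.choices[0].message.content)
```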
He then showed how image recognition worked by submitting an image to GPT-4 (in Discord) and asking it to describe it in great detail. He then submitted another image and asked, "What is funny about this image?" In both instances, it answered correctly.
He then took a photo (using his mobile phone) of a web page design he had drawn by hand on a piece of paper, sent that image to GPT-4 (in Discord), and asked it to create HTML and JavaScript code for it. That worked really well and was very impressive.
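Image input was only shown through Discord during the stream and wasn't generally available at launch, so the following is just a sketch of how the same idea maps onto the API's image-input format; the vision-capable model name, file path, and prompt are my own assumptions:

```python
import base64

from openai import OpenAI

client = OpenAI()

# Placeholder path: the phone photo of the hand-drawn mockup.
with open("hand_drawn_mockup.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumes a vision-capable model; image input was not public at launch
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Turn this hand-drawn web page design into working HTML and JavaScript.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```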
Finally, he pasted in a tax code excerpt (16 pages of tax code... boring...), then asked GPT-4 what the tax liability of a particular married couple would be. It provided the correct answer.
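Again, this isn't how it was run on stream, but a minimal sketch of the same long-document question-answering pattern looks like this (the file name, the couple's names, and the question are hypothetical, and the demo itself used a long-context variant of the model):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder: a local text file holding the pasted tax-code excerpt.
with open("tax_code_excerpt.txt", "r", encoding="utf-8") as f:
    tax_code = f.read()

question = (
    "Alice and Bob are a married couple filing jointly with the income "
    "described above. Based only on this excerpt, what is their total tax liability?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided tax-code excerpt."},
        {"role": "user", "content": f"{tax_code}\n\n{question}"},
    ],
)
print(response.choices[0].message.content)
```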
He ended the stream by having GPT-4 write a poem about the tax liability, which it did.
There you have it. GPT-4 is out, and we all still have our jobs and have not been replaced by AI (yet!).