Restricting the next predicted token so that the generated output adheres to a specific context-free grammar seems like a big step forward in weaving language models into applications.
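
To make that concrete, here is a minimal sketch of the idea: at each decoding step, mask out every token that could not be extended into a valid parse, then pick from what remains. The grammar here is balanced parentheses (a context-free language that a regular expression, or a plain stop-sequence trick, cannot enforce); random scores stand in for real model logits, and names like `constrained_sample` are my own for illustration.

```python
import random

# Toy setup: three tokens, and the grammar "balanced parentheses".
VOCAB = ["(", ")", "<end>"]

def constrained_sample(max_len: int = 10) -> str:
    out, depth = "", 0
    for step in range(max_len):
        remaining = max_len - step
        # Stand-in for model logits: random scores per token. In a real
        # system these would come from the language model's forward pass.
        scores = {tok: random.random() for tok in VOCAB}
        # Mask every token that could not lead to a valid parse:
        #   "("     needs enough budget left to close the new bracket
        #   ")"     is legal only if a bracket is open
        #   "<end>" is legal only once the string is balanced and non-empty
        allowed = {}
        if depth <= remaining - 2:
            allowed["("] = scores["("]
        if depth > 0:
            allowed[")"] = scores[")"]
        if depth == 0 and out:
            allowed["<end>"] = scores["<end>"]
        tok = max(allowed, key=allowed.get)  # greedy pick from the masked set
        if tok == "<end>":
            break
        depth += 1 if tok == "(" else -1
        out += tok
    return out

print(constrained_sample())  # e.g. "(()())"
```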
Using system prompts provides an intuitive way to separate the input and output schema from the input content itself.
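
As a minimal sketch of the pattern, assuming the common chat-completions message shape: the system turn pins down the task and the output schema once, so every user turn carries nothing but content. The schema and helper name here are illustrative.

```python
import json

# The system prompt fixes the task and the output schema once...
SYSTEM_PROMPT = """\
You extract calendar events. Reply with JSON only, matching:
{"title": string, "date": "YYYY-MM-DD", "attendees": [string, ...]}"""

def build_messages(user_text: str) -> list[dict]:
    # ...so each request carries only content in the user turn.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Lunch with Ana and Raj next Friday, June 6th.")
print(json.dumps(messages, indent=2))
```

Keeping the schema in the system turn also means user content can be swapped or batched without touching the contract.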
With the support of GPT-4, I feel unstoppable. The overnight surge in productivity is intoxicating, not for making money or starting a business, but for the sheer joy of continuously creating ideas...
I wrote a few paragraphs disagreeing with Paul's take, asserting that, as Simon suggests, we should think of language models like ChatGPT as a “calculator for words”.