The basic idea starts with a question: how well can GPT-3, trained on the Feynman Lectures, answer physics questions? GPT-3 (Generative Pre-trained Transformer 3) is OpenAI’s autoregressive language model for natural language processing. It has shown significant improvements on core tasks such as summarization, question answering, and reading comprehension (see examples below). What makes GPT-3 so impressive is that it is a generalized neural network with billions of parameters, and its success had previously been rivaled only by narrowly focused neural networks. Here are some applications from beta testers.