Vicuna-13B - an open-source chatbot trained by fine-tuning LLaMA on ~70K user-shared ChatGPT conversations collected from ShareGPT.
Claims to achieve "more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases" (the asterisk refers to their informal evaluation using GPT-4 as a judge).
Seems possible to run it on your own machine with a single GPU.
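A minimal sketch of what running it locally might look like with Hugging Face transformers, under some assumptions: the model id below is assumed (the original release shipped delta weights that had to be merged into LLaMA first), and 8-bit loading needs the bitsandbytes and accelerate packages installed.

```python
# Minimal sketch: running Vicuna-13B on a single GPU with Hugging Face
# transformers. The model id is an assumption; substitute a local path to
# merged weights if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.5"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # 8-bit quantization so 13B fits in roughly 14 GB of VRAM
    device_map="auto",   # place layers on the available GPU automatically
)

# Vicuna expects a conversation-style prompt.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: Explain why the sky is blue. ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```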
Just gave it a quick try, and the results look impressive compared to the previous open chatbots I've tested. I didn't expect explanations that feel this close to ChatGPT's. Interesting!
How about math problem-solving? It does get the final answers correct. I still need to take a closer look at the explanations/reasoning traces for these questions, since those can be difficult for LLMs to get right even when the final answer is correct.
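One quick way to inspect those traces, reusing the model and tokenizer from the sketch above, is to prompt it with a word problem and read the full generation rather than just the final number; the problem below is only an illustrative example.

```python
# Illustrative math prompt, reusing `model` and `tokenizer` from the sketch
# above. The point is to read the whole generated explanation, not just the
# final answer.
math_prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: A train travels 60 km in 45 minutes. What is its average speed in "
    "km/h? Explain your reasoning step by step. ASSISTANT:"
)
inputs = tokenizer(math_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```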