In the previous part of this blog, we explored the limitations of GPT-4. In this post, we will explore whether open-source models can overcome the limitations of black-box models. Specifically, we will consider the use of Llama 2 in this scenario.
The Llama 2 paper from Meta is very comprehensive.
Llama 2 is a family of LLMs released in three sizes: 7B, 13B, and 70B parameters. It consists of two distinct model families. The base model, Llama 2, was pretrained on publicly available online data sources and is an updated version of Llama. The fine-tuned model, Llama 2-Chat, leverages publicly available instruction datasets and over 1 million human annotations, and is optimised for dialogue use cases.
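Because Llama 2-Chat is fine-tuned on a specific instruction format, prompts need to follow its template ([INST] turn markers with an optional <<SYS>> system block). As a minimal sketch, the helper below builds such a prompt as a plain string; the function name and example messages are illustrative, not part of any official API:

```python
def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in the instruction
    template that Llama 2-Chat was fine-tuned on."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

# Illustrative usage: the resulting string would be passed to the
# model's tokenizer for generation.
prompt = build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "Summarise the Llama 2 paper in one sentence.",
)
print(prompt)
```

The model's reply is then generated after the closing [/INST]; for multi-turn dialogue, previous turns are appended in the same bracketed format.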
The most important characteristic of the paper is its depth and transparency, covering the following aspects:
- Use of safety-specific data annotation
- Transparency in terms of fine-tuning strategies, training corpus, handling of personal information, and hyperparameter tuning
- Up-sampling of factual sources to enhance knowledge and minimize hallucinations
- Details about the Supervised Fine-Tuning (SFT) stage, Reinforcement Learning with Human Feedback (RLHF), and reward models
Safety and transparency are a focus of Llama 2. In this sense, it overcomes the challenges of drift in black-box models like GPT. However, there are some more nuances. Currently, Llama 2 outperforms other open-source LLMs but not closed-source LLMs like GPT-4. If usage picks up over time, it could outperform them. Also, because Llama 2 is released in three sizes, it could be more easily deployed within the enterprise, making it ideal for regulated industries through on-prem deployment. On the other hand, on-prem open-source deployments are technically more complex. Finally, much depends on the use case and how you use the model, i.e. whether the output of the LLM is directly exposed to the end user.
Image source: drifting sands over time https://pixabay.com/photos/india-desert-sand-pattern-sand-355/