
Fine-tuning on Wikipedia Datasets for Improved Accuracy and Performance #NLP

Fine-tuning on Wikipedia Datasets

The video walks through fine-tuning Llama 3 for a low-resource language. Complete scripts and future improvements are available through Trelis, along with one-click fine-tuning and LLM templates. Trelis livestreams run on Thursdays at 5 pm Irish time on YouTube, and the newsletter, resources, support, and Discord are linked from the Trelis website. Video resources include the slides, the datasets, and WikiExtractor.

The timestamps outline the topics covered:

- Fine-tuning Llama 3 for a low-resource language
- Creating a HuggingFace dataset with WikiExtractor
- Fine-tuning setup with LoRA
- Dataset blending to avoid catastrophic forgetting
- Trainer setup and parameter selection
- Inspecting losses and results
- Learning rates and annealing
- Further tips and improvements

Illustrative sketches of several of these steps follow below. Overall, the video is a valuable resource for anyone fine-tuning language models for low-resource languages.
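To make the dataset-creation step concrete, here is a minimal sketch of turning WikiExtractor output into a HuggingFace dataset, assuming WikiExtractor has already been run on a Wikipedia dump. The dump name, output directory, and Hub repo name are placeholders, not values from the video.

```python
# Assumes WikiExtractor was run on a dump first, e.g.:
#   python -m wikiextractor.WikiExtractor gawiki-latest-pages-articles.xml.bz2 \
#       --json -o extracted
# WikiExtractor writes one JSON object per line (fields: id, url, title, text).
import glob

from datasets import load_dataset

# Collect every shard WikiExtractor produced (extracted/AA/wiki_00, ...).
files = glob.glob("extracted/**/wiki_*", recursive=True)

# The "json" builder reads JSON-lines files directly.
ds = load_dataset("json", data_files=files, split="train")

# Drop empty stubs and keep only the article text for language modelling.
ds = ds.filter(lambda ex: len(ex["text"].strip()) > 0)
ds = ds.remove_columns([c for c in ds.column_names if c != "text"])

# Optionally push to the Hub for reuse (repo name is a placeholder).
# ds.push_to_hub("your-username/wikipedia-low-resource")
print(ds)
```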
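For the LoRA setup, the sketch below shows one common way to wrap a Llama 3 base model with peft adapters. The rank, alpha, dropout, and target modules are illustrative defaults, not the settings used in the video.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # gated repo; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,             # adapter rank
    lora_alpha=32,    # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the adapter weights are trained, this keeps memory requirements far below full fine-tuning while still adapting the model to the new language.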
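Dataset blending to avoid catastrophic forgetting means mixing the new-language corpus with general-purpose text so the model retains its original abilities. The sketch below uses interleave_datasets from the datasets library; the 90/10 ratio and the choice of WikiText as the replay corpus are assumptions, and `ds` is the Wikipedia dataset from the first sketch.

```python
from datasets import interleave_datasets, load_dataset

# `ds` is the single-column ("text") Wikipedia dataset built above.
# Load a general-purpose English corpus as replay data to mix in.
replay = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
replay = replay.filter(lambda ex: len(ex["text"].strip()) > 0)

blended = interleave_datasets(
    [ds, replay],
    probabilities=[0.9, 0.1],           # mostly target language, some English
    seed=42,
    stopping_strategy="all_exhausted",  # sample until both datasets are used up
)
```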
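Finally, a sketch of the trainer setup with a cosine-annealed learning rate, continuing from the model, tokenizer, and blended dataset above. The learning rate, warmup ratio, batch size, and sequence length are plausible starting points rather than values confirmed by the video.

```python
from transformers import (
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Llama tokenizers ship without a pad token; reuse EOS for padding.
tokenizer.pad_token = tokenizer.eos_token

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_ds = blended.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama3-lora-wiki",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=1e-4,           # common starting point for LoRA
    lr_scheduler_type="cosine",   # anneal the learning rate toward zero
    warmup_ratio=0.03,
    logging_steps=10,
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```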


Source link: https://www.youtube.com/watch?v=bo49U3iC7qY

