Pinned · Eduardo Muñoz in Towards AI · Aug 10, 2023
Fine-Tuning a Llama-2 7B Model for Python Code Generation
A demo on how to fine-tune the new Llama-2 using PEFT, QLoRA, and the Hugging Face utilities

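For a flavor of the setup that article walks through, here is a minimal QLoRA-style sketch using the Hugging Face transformers and peft libraries. The model id, LoRA rank, and target modules below are illustrative choices, not necessarily the article's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access approval

# 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections (LoRA, via peft)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here, the adapted model can be trained on a code-generation dataset with a standard Trainer or TRL's SFTTrainer.
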
Pinned · Eduardo Muñoz in Analytics Vidhya · Jul 18, 2023
Embeddings with Sentence Transformers and Pinecone for Question Answering in Spanish
A simple how-to guide on using a vector database, semantic search and sentence-transformers for a question-answering task

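As a rough illustration of that workflow, the sketch below embeds a couple of Spanish sentences with a multilingual sentence-transformers model and queries a Pinecone index. The API key, index name, documents, and model choice are placeholders, and the code assumes the current pinecone Python client and an index created with dimension 768.

```python
from sentence_transformers import SentenceTransformer
from pinecone import Pinecone

# Multilingual model that handles Spanish; embedding dimension is 768
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("qa-es")  # hypothetical, pre-created index

docs = {
    "doc1": "Madrid es la capital de España.",
    "doc2": "El Amazonas es el río más caudaloso del mundo.",
}
index.upsert(vectors=[(doc_id, model.encode(text).tolist()) for doc_id, text in docs.items()])

# Semantic search: embed the question and retrieve the closest passage
query = "¿Cuál es la capital de España?"
result = index.query(vector=model.encode(query).tolist(), top_k=1)
print(result)
```
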
Pinned · Eduardo Muñoz in Towards Data Science · Nov 2, 2020
Attention Is All You Need: Discovering the Transformer Paper
Detailed implementation of a Transformer model in TensorFlow

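The building block such an implementation revolves around is the paper's scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A self-contained TensorFlow sketch of it (function name and mask convention are illustrative) could look like this:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    matmul_qk = tf.matmul(q, k, transpose_b=True)          # (..., seq_q, seq_k)
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_logits = matmul_qk / tf.math.sqrt(dk)

    if mask is not None:
        scaled_logits += (mask * -1e9)  # push padded / future positions to ~0 weight

    attention_weights = tf.nn.softmax(scaled_logits, axis=-1)
    return tf.matmul(attention_weights, v), attention_weights
```
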
Eduardo Muñoz in Towards AI · Feb 8, 2024
LlamaIndex Query Pipelines: Quickstart Guide to the Declarative Query API
A quick introduction and a how-to guide for some use cases

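A minimal declarative chain with the Query Pipeline API might look like the sketch below. The imports assume the llama-index 0.10.x package layout and an OpenAI key in the environment; the prompt and model choice are placeholders.

```python
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

prompt = PromptTemplate("Write a short Python tip about {topic}.")
llm = OpenAI(model="gpt-3.5-turbo")

# chain=[...] wires the components sequentially: the prompt's output feeds the LLM
pipeline = QueryPipeline(chain=[prompt, llm], verbose=True)
print(pipeline.run(topic="list comprehensions"))
```
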
Eduardo Muñoz in Towards AI · Jan 9, 2024
Fast and Efficient Model Finetuning using the Unsloth Library
Finetuning a Llama-2 model up to 2x faster for a code generation task

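The sketch below shows the Unsloth entry point that replaces the usual transformers/peft loading step. The checkpoint name and LoRA settings are illustrative, and the exact keyword arguments may differ across Unsloth releases.

```python
from unsloth import FastLanguageModel

# Load a pre-quantized 4-bit Llama-2 checkpoint with Unsloth's patched kernels
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth fuses the backward passes for the speedup
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
```
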
Eduardo Muñoz in Towards AI · Dec 29, 2023
ReST Meets ReAct: Improving ReAct with Self-Critique, AI Feedback, and Synthetic Data Generation
A brief description of this adaptation of Reinforced Self-Training (ReST) to an agentic configuration

Eduardo Muñoz in Towards AI · Dec 19, 2023
Dense X Retrieval Technique in LangChain and LlamaIndex
A summary of this new approach to dense retrieval based on decomposed propositions as retrieval units

Eduardo Muñoz in Towards AI · Sep 7, 2023
GPTQ Quantization on a Llama 2 7B Fine-Tuned Model With Hugging Face
An easy-to-follow how-to guide on quantizing an LLM

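For orientation, here is a minimal sketch of post-training GPTQ quantization through the transformers integration (it relies on optimum and auto-gptq being installed). The model id and calibration dataset are illustrative, not necessarily the article's choices.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ, calibrated on the "c4" dataset shipped with the integration
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization happens during loading; this step needs a GPU and some time
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)
quantized_model.save_pretrained("llama-2-7b-gptq")  # reload later without re-quantizing
```
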
Eduardo Muñoz in Analytics Vidhya · Oct 4, 2021
Fine-tune a RoBERTa Encoder-Decoder Model Trained on MLM for Text Generation
Part 2 of a model to generate names from product descriptions

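Warm-starting a RoBERTa-to-RoBERTa encoder-decoder in transformers is the model class involved there; a rough sketch follows. The checkpoint, token settings, and example input are illustrative, and the model still needs fine-tuning before its generations are useful.

```python
from transformers import EncoderDecoderModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Encoder and decoder are both initialized from the MLM-pretrained RoBERTa;
# the cross-attention layers are randomly initialized and learned during fine-tuning
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")

# The decoder needs explicit start / pad tokens before training or generation
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Stainless steel water bottle, 750 ml", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=16)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```
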
Eduardo Muñoz in Analytics Vidhya · Aug 16, 2021
Create a Tokenizer and Train a Hugging Face RoBERTa Model from Scratch
Part 1: A product names generator using an Encoder-Decoder Transformer

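A minimal sketch of the tokenizer-from-scratch step using the tokenizers library; the corpus file, vocabulary size, and output directory are placeholder values.

```python
import os
from tokenizers import ByteLevelBPETokenizer

# Byte-level BPE, the tokenizer family RoBERTa uses
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["product_descriptions.txt"],  # hypothetical raw-text corpus, one description per line
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

os.makedirs("tokenizer-roberta", exist_ok=True)
tokenizer.save_model("tokenizer-roberta")  # writes vocab.json and merges.txt
```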