Showing posts with label ollama. Show all posts

Thursday, February 6, 2025

Saturday, December 7, 2024

Control LLM's Output with Ollama Structured Outputs

 This video shows how to use Ollama to constrain the LLM output to a structured format locally.
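As a quick illustration of the idea, structured outputs work by handing Ollama a JSON schema that constrains generation. A minimal sketch (the `Pet` model is illustrative, not from the video) builds such a schema with Pydantic; the schema would then be passed as the `format` argument of `ollama.chat()`:

```python
from pydantic import BaseModel

# Define the structure the model's output must follow
# (field names here are illustrative, not from the video).
class Pet(BaseModel):
    name: str
    species: str
    age: int

# Ollama constrains generation to this JSON schema when it is passed as
# the `format` argument, e.g.:
#   ollama.chat(model="llama3.1", messages=[...], format=Pet.model_json_schema())
schema = Pet.model_json_schema()
print(schema["required"])
```

The response content can then be parsed back with `Pet.model_validate_json(...)`.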

Saturday, November 30, 2024

Create a Free Local AI Dungeon Game with Ollama

This video shows how to easily create a Dungeons & Dragons-style game with local models via Ollama.

Saturday, August 31, 2024

Install RAG Me Up with Ollama Locally - Free RAG with Any Dataset

This video shows how to install and use RAG Me Up, a generic framework (server + UIs) that enables you to do RAG on your own dataset.

Friday, August 23, 2024

Easy Tutorial to Build Full Free RAG Pipeline from Scratch with Your Own Data

This video shows how to install Haystack with Ollama locally for a free end-to-end RAG pipeline over your own documents.

Saturday, August 17, 2024

Free LLM Dataset Creation with Ollama Locally - Easy Tutorial

This video is a step-by-step tutorial to create your own custom dataset from your database schema locally with a free model from Ollama.
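A minimal sketch of the schema-to-dataset idea (not the video's exact script): read table definitions from a SQLite database and turn each into a prompt that a local Ollama model could answer to produce training examples:

```python
import sqlite3

# Build an in-memory database with one example table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Turn each table's DDL into a dataset-generation prompt.
prompts = []
for name, ddl in conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
):
    prompts.append(
        f"Given this table definition:\n{ddl}\n"
        f"Write one natural-language question about the {name} table "
        "and the SQL query that answers it."
    )

# Each prompt would then be sent to a local model, e.g.
#   ollama.generate(model="llama3.1", prompt=prompts[0])
print(len(prompts))
```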

Monday, August 5, 2024

Mem0 with Ollama Locally - Memory Layer for Personalized AI

This video is a step-by-step, easy tutorial to install Mem0 locally and integrate it with a local Ollama model.


Code:

conda create -n mem python=3.11 -y && conda activate mem

pip install torch
pip install -U transformers sentencepiece accelerate
pip install sentence_transformers
pip install ollama
pip install mem0ai

import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = ""  # set this only if your Mem0 setup falls back to OpenAI (e.g. for embeddings)

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
# Add a memory for a user
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})

# Get all memories
all_memories = m.get_all()
print(all_memories)

# Get a single memory by ID
specific_memory = m.get("59565340-c742-4e09-8128-702e810cb4fd")
print(specific_memory)

# Search for memories related to a query
related_memories = m.search(query="alice hobbies?", user_id="alice")
print(related_memories)

# Update a memory by ID
result = m.update(memory_id="59565340-c742-4e09-8128-702e810cb4fd", data="Visited Brisbane in Winter")
print(result)

m.delete(memory_id="59565340-c742-4e09-8128-702e810cb4fd") # Delete a memory

m.delete_all(user_id="alice") # Delete all memories

all_memories = m.get_all()
print(all_memories)

Wednesday, July 31, 2024

Tuesday, July 30, 2024

Get Llama 3.1 70B-Level AI Quality from 8B with Ollama Locally for Free

This video is a step-by-step, easy tutorial to get the quality of Llama 3.1 70B from Llama 3.1 8B with Ollama locally. It's inspired by Matt Shumer's GPT Prompt Engineer.
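The core of this approach is prompting the smaller model to draft, critique, and revise its own answer. A sketch of such a reflection-style prompt (the wording is illustrative, not the exact prompt from the video):

```python
# Reflection-style prompt template: the model drafts, critiques its draft,
# then writes an improved final answer, all in one structured response.
REFLECTION_PROMPT = """You are an expert assistant.
Question: {question}

First, write a draft answer inside <draft> tags.
Then, critique your draft inside <critique> tags, listing any mistakes.
Finally, write the improved answer inside <answer> tags."""

prompt = REFLECTION_PROMPT.format(question="Why is the sky blue?")
# This prompt would be sent to a local model, e.g.
#   ollama.generate(model="llama3.1:8b", prompt=prompt)
print(len(prompt))
```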

Sunday, July 28, 2024

Step-by-Step Guide to Create Free Dataset with Ollama and Llama 3.1 Locally

This video shows an easy step-by-step guide to generate an aligned preference dataset locally by using Ollama and the Llama 3.1 70B model.
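For context, a preference dataset is commonly stored as prompt/chosen/rejected records (the shape used for DPO-style alignment). A sketch with placeholder values — in the video, the responses would come from a local Llama 3.1 model via Ollama:

```python
import json

# One preference record: a prompt plus a preferred and a dispreferred response.
# The values here are placeholders, not model outputs.
record = {
    "prompt": "Summarize the water cycle in one sentence.",
    "chosen": "Water evaporates, condenses into clouds, and returns as precipitation.",
    "rejected": "The water cycle is a thing that happens with water.",
}

# One JSON object per line (JSONL) is a common on-disk format for such datasets.
line = json.dumps(record)
print(sorted(record))
```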

Wednesday, July 24, 2024

Run Llama 3.1 with Ollama and Google Colab for Free Using AdalFlow

This video shows a hands-on tutorial on how to run the Llama 3.1 8B model with Ollama on free Google Colab with AdalFlow.

Sunday, July 21, 2024

RAG Pipeline Tutorial Using Ollama, Triplex, and LangChain On Custom Data Locally

This video is a step-by-step guide to building an end-to-end RAG pipeline on your own custom data locally, using the Triplex model with Ollama and LangChain, with a GUI in Gradio.

Monday, July 8, 2024

Run LoRA Adapter of Any Model with Ollama

This video is a step-by-step tutorial to create and integrate LoRA adapters of models in Ollama.

Sunday, July 7, 2024

Install Microsoft GraphRAG with Ollama Locally

This video is a step-by-step tutorial to install Microsoft GraphRAG with Ollama models on your own data.

Friday, July 5, 2024

RouteLLM - Create LLM Routers Locally

This video installs RouteLLM locally, a framework for serving and evaluating LLM routers. It also shows a hands-on demo of routing model traffic between Ollama models and OpenAI models.

Tuesday, July 2, 2024

Install Polars DataFrame Library and Use It with Local LLMs via Ollama

This video installs Polars and demonstrates how to integrate it with local Ollama models for data analysis.

Sunday, June 23, 2024

Create Local RAG Pipelines with R2R and Ollama for Free

This video installs R2R (RAG to Riches), an open-source framework for building and deploying high-quality Retrieval-Augmented Generation (RAG) systems, with Ollama and local models.

Thursday, June 20, 2024

Create a Local AI Agent with Ollama and Langchain Easily - Tutorial

This video is a step-by-step tutorial to create an agentic application with LangChain and Ollama locally, with function calling.
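Function calling hinges on describing each tool in a JSON-schema style that the chat API can pass to the model. A sketch under that assumption — the weather tool is a common illustrative example, not the one from the video:

```python
# A Python function the agent can call...
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# ...and its JSON-schema description, in the shape accepted by chat APIs
# that support tool calling (e.g. via Ollama's `tools` parameter).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The agent loop would pass tools=[weather_tool] to the chat call and, when
# the model returns a tool call, dispatch it to get_weather().
print(weather_tool["function"]["name"])
```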

Monday, June 17, 2024

Convert Text to SQL with Ollama and Vanna with Any Database - Local and Free

 

This video is a step-by-step tutorial to install Vanna.ai locally with Ollama and your own custom database, so you can chat with the database and convert text to SQL using AI.
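To illustrate the flow end to end: the model turns a natural-language question into SQL, and that SQL is executed against the database. In this sketch the "generated" query is hard-coded so the execution step can be shown without a model; in the video, Vanna + Ollama would produce it:

```python
import sqlite3

# Small in-memory database to query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

question = "What is the total order amount?"
generated_sql = "SELECT SUM(amount) FROM orders"  # what the model would return

# Execute the generated SQL and read back the answer.
total = conn.execute(generated_sql).fetchone()[0]
print(total)
```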

Tuesday, June 4, 2024

Integrate Local LLMs with Mobile and Web Apps

This video installs Firebase Genkit locally, a framework with powerful tooling to help app developers build, test, deploy, and monitor AI-powered features.