Tuesday, July 25, 2023

FreeWilly Model Installation Locally Using Petals

This is a step-by-step guide on how to install the FreeWilly1 or FreeWilly2 model locally on AWS using Petals.

Commands Used:

%pip install petals

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # used in the Petals-based alternative shown below

model_name = "stabilityai/FreeWilly2"

# Authenticate with Hugging Face (FreeWilly2 is a gated, Llama-2-based model).
!huggingface-cli login --token <Your token>

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# Load the full model onto this machine; device_map="auto" spreads the
# weights across whatever GPUs are available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

system_prompt = "### System:\nYou are Free Willy, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "What is the capital of Tonga?"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
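The block above loads the whole model onto the local machine. To actually shard inference across a Petals swarm (the approach the title refers to), the model would instead be loaded with the AutoDistributedModelForCausalLM class imported earlier. Below is a minimal sketch of that path; it assumes a Petals swarm (public or one you run yourself) is actually serving stabilityai/FreeWilly2, which may require hosting your own swarm:

import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "stabilityai/FreeWilly2"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# The Petals client keeps only the embedding and output layers locally;
# the transformer blocks are served by peers in the swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

system_prompt = "### System:\nYou are Free Willy, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
prompt = f"{system_prompt}### User: What is the capital of Tonga?\n\n### Assistant:\n"

# Pass the token IDs directly; generation is routed through the swarm.
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
output = model.generate(input_ids, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The trade-off: the local load in the main block needs enough GPU memory for the full ~70B-parameter model, while the Petals path needs very little local memory but depends on swarm availability and network latency.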
