I’m trying to get scrapegraph-ai working. I downloaded the repo from https://github.com/ScrapeGraphAI/Scrapegraph-ai and followed the instructions in the README. I installed Ollama from https://ollama.com/ and pulled the llama3.1 model. When I launch Ollama from CMD, everything seems to work correctly: I can use the model with the command ollama run llama3.1. However, I have problems in Python. I want to scrape with the scrapegraph-ai library using the Ollama models locally, so I wrote a simple Python program that takes the URL and the prompt instructions as command-line arguments:
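For reference, these are the Ollama commands I ran beforehand to pull both models that my config refers to (the embedding model name is just what I saw in examples, so it may be part of the problem):

```shell
# pull the chat model and the embedding model referenced in my config
ollama pull llama3.1
ollama pull nomic-embed-text

# sanity check that the local Ollama API is up and lists both models
curl http://localhost:11434/api/tags
```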
main.py:
import sys

from scraper import Scraper


def main():
    if len(sys.argv) != 3:
        print("Usage: python main.py <URL> <PROMPT_INSTRUCTIONS>")
        sys.exit(1)
    scr = Scraper(sys.argv[1], sys.argv[2])
    scr.scrape()


if __name__ == "__main__":
    main()
scraper.py:
from scrapegraphai.graphs import SmartScraperGraph


class Scraper:
    def __init__(self, url, command):
        self.url = url
        self.command = command

    def scrape(self):
        try:
            graph_config = {
                "llm": {
                    "model": "ollama/llama3.1",
                    "temperature": 1,
                    "format": "json",
                    "model_tokens": 2000,
                    "base_url": "http://localhost:11434",
                },
                "embeddings": {
                    "model": "ollama/nomic-embed-text",
                    "temperature": 0,
                    "base_url": "http://localhost:11434",
                },
            }
            smart_scraper_graph = SmartScraperGraph(
                prompt=self.command,
                source=self.url,
                config=graph_config,
            )
            result = smart_scraper_graph.run()
            print(result)
        except Exception as e:
            print(f"Error: {e}")
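For completeness, this is how I invoke the script (the URL and prompt here are just placeholders, not the actual values I used):

```shell
python main.py "https://example.com" "Extract the page title and the main heading"
```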
But when I launch my Python code, I get the following message:
Model not found, using default token size (8192)
Does anyone know what I’m doing wrong?