I've been working on a project to create a chatbot-like app using LangChain, with Streamlit for the UI, but whenever I try to run the app I keep getting this error:
```
(venv) D:\Gen AI projs\Code Analyzer>python main.py
Starting the AI Code Analyzer...
INFO:root:Starting Streamlit app...

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.10:8501

2024-12-25 15:26:00.496 Uncaught app execution
Traceback (most recent call last):
  File "D:\Gen AI projs\Code Analyzer\venv\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 88, in exec_func_with_error_handling
    result = func()
             ^^^^^^
  File "D:\Gen AI projs\Code Analyzer\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 579, in code_to_exec
    exec(code, module.__dict__)
  File "D:\Gen AI projs\Code Analyzer\ui\streamlit_ui.py", line 4, in <module>
    from app.model_loader import load_model
ModuleNotFoundError: No module named 'app'
```
My directory structure looks like this:

And the code in each file is as follows:
- main.py:

```python
import os
import sys
import subprocess
import logging

# Add project root to PYTHONPATH
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

logging.basicConfig(level=logging.INFO)

def start_streamlit():
    """Starts the Streamlit application."""
    try:
        logging.info("Starting Streamlit app...")
        subprocess.run(["streamlit", "run", "ui/streamlit_ui.py"])
    except Exception as e:
        logging.error(f"Error starting Streamlit app: {e}")

if __name__ == "__main__":
    print("Starting the AI Code Analyzer...")
    start_streamlit()
```
- ui/streamlit_ui.py:

```python
import streamlit as st
import os
import sys
from app.model_loader import load_model
from app.analyzer import analyze_code

print("Current working directory:", os.getcwd())

# Dynamically add the root directory to PYTHONPATH
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
print("SRC Directory exists:", os.path.exists(os.path.join(os.path.dirname(__file__), "..", "src")))

# Load the model once when the app starts
st.title("AI Code Analyzer")
tokenizer, model = load_model()

# User inputs code for analysis
user_code = st.text_area("Enter your code:", height=300)

if st.button("Analyze Code"):
    if not user_code.strip():
        st.error("Please enter some code to analyze.")
    else:
        with st.spinner("Analyzing code..."):
            try:
                result = analyze_code(user_code, tokenizer, model)
                st.success("Analysis Complete!")
                st.text_area("Analysis Result:", value=result, height=300, disabled=True)
            except Exception as e:
                st.error(f"Analysis failed: {e}")
```
- app/model_loader.py:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def load_model():
    """Loads the StarCoder model and tokenizer."""
    model_name = "bigcode/starcoderbase"
    try:
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="float16")
    except Exception as e:
        raise RuntimeError(f"Failed to load model: {e}")
    return tokenizer, model
```
- app/analyzer.py:

```python
import langdetect  # You may need to install this package

def analyze_code(code: str, tokenizer, model, language: str = None) -> str:
    """
    Analyze the code using the model for errors, security vulnerabilities, and optimizations.
    The function generates a detailed analysis of the code in bullet points.
    """
    # Detect language if not provided
    if not language:
        language = langdetect.detect(code)

    # If a language is provided, include it in the prompt
    language_info = f" in {language}" if language else ""

    input_prompt = (
        f"Analyze the following code{language_info}:\n\n"
        f"{code}\n\n"
        f"1. Detect syntax or logical errors.\n"
        f"2. Identify security vulnerabilities.\n"
        f"3. Suggest optimizations or best practices.\n"
        f"Provide detailed feedback in bullet points."
    )

    try:
        # Tokenizing the input prompt
        inputs = tokenizer(input_prompt, return_tensors="pt")
        # Generating output using the model
        outputs = model.generate(inputs.input_ids, max_length=512, num_return_sequences=1)
    except Exception as e:
        raise RuntimeError(f"Error during analysis: {e}")

    # Decoding the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response
```
- requirements.txt:

```
streamlit
langchain
transformers
torch
langdetect
```
Please help me with this problem. Thank you so much!
I looked at the Streamlit community forum for a solution, but the advice there was to add a requirements.txt file to the project; as you can see, I already have one, and I had already pip-installed everything before trying to run the project.
This is not a Streamlit dependency problem, but a working-directory problem. When you run `main.py` directly, the project root is on `sys.path` (you append it yourself), so imports like `app.model_loader` would resolve there. However, `main.py` launches Streamlit in a subprocess, and that child process does not inherit the `sys.path` changes made in the parent, so when `ui/streamlit_ui.py` runs, the `app` package can no longer be found.
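You can verify this inheritance behavior with a small standalone sketch (the `/tmp/marker` path is just an illustrative placeholder): changes to `sys.path` stay in the parent process, while the `PYTHONPATH` environment variable is passed down to children.

```python
import os
import subprocess
import sys

# The child process reports whether a marker directory is on its sys.path.
child = "import sys; print('/tmp/marker' in sys.path)"

# sys.path changes made in the parent are NOT inherited by the child:
sys.path.append("/tmp/marker")
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())  # False

# The PYTHONPATH environment variable IS inherited:
env = dict(os.environ, PYTHONPATH="/tmp/marker")
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True, env=env)
print(result.stdout.strip())  # True
```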
There are some solutions you can adopt:
- You can run the application with `PYTHONPATH` set to the project root:

  ```
  PYTHONPATH=. python main.py
  ```

  This assumes you are in the root of the project. `PYTHONPATH` is an environment variable used by the Python interpreter to determine where to locate the modules imported into a program; unlike in-process `sys.path` edits, environment variables are inherited by child processes.

- You can set the environment variable code-side using the `getcwd` function:

  ```python
  os.environ["PYTHONPATH"] = os.getcwd()
  ```
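Mutating `os.environ` in place works, but it also changes the environment for everything else the parent process spawns afterwards. A slightly more contained variant (a sketch, not from the original answer; `build_env` is a hypothetical helper) passes a modified copy of the environment to the subprocess instead:

```python
import os
import subprocess

def build_env(project_root: str) -> dict:
    """Return a copy of the current environment with project_root
    prepended to PYTHONPATH (using os.pathsep so it works on Windows too)."""
    env = os.environ.copy()
    env["PYTHONPATH"] = project_root + os.pathsep + env.get("PYTHONPATH", "")
    return env

def start_streamlit(project_root: str) -> None:
    """Launch the Streamlit UI with the project root importable in the child."""
    subprocess.run(
        ["streamlit", "run", os.path.join(project_root, "ui", "streamlit_ui.py")],
        env=build_env(project_root),
        check=True,
    )
```

Only the child process sees the modified `PYTHONPATH`; the parent's environment is left untouched.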
Finally, I suggest you use the `check=True` parameter in the `subprocess.run()` call to ensure that an error is raised if the streamlit command fails. If the command exits with a non-zero status, `subprocess.CalledProcessError` will be raised.
```python
def start_streamlit():
    """Starts the Streamlit application."""
    try:
        logging.info("Starting Streamlit app...")
        # (one of the possible solutions)
        # Setting PYTHONPATH to the current working directory
        os.environ["PYTHONPATH"] = os.getcwd()
        subprocess.run(["streamlit", "run", "ui/streamlit_ui.py"], check=True)
    except subprocess.CalledProcessError as e:
        # Logging the specific error if the Streamlit command fails
        logging.error(f"Streamlit command failed with error: {e}")
    except Exception as e:
        # Logging any other errors that might occur
        logging.error(f"Error starting Streamlit app: {e}")
```
- `subprocess.CalledProcessError`: if the streamlit command fails, this exception will be caught and the error message logged.
- `Exception`: a more generic catch-all for any other exceptions that might occur, ensuring they are also logged.
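To see the difference `check=True` makes, here is a minimal, self-contained sketch that runs a deliberately failing child command:

```python
import subprocess
import sys

# A child that exits with a non-zero status; check=True turns that
# exit code into a CalledProcessError instead of a silent failure.
try:
    subprocess.run([sys.executable, "-c", "raise SystemExit(2)"], check=True)
except subprocess.CalledProcessError as e:
    print(f"command failed with exit code {e.returncode}")  # exit code 2
```

Without `check=True`, the same call would return a `CompletedProcess` with `returncode == 2` and execution would continue as if nothing went wrong.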
See more here:

- `subprocess.run`
- `subprocess.CalledProcessError`