Overview
Do you want to build an AI assistant that can provide clear, expert-level answers to your questions? That's exactly what we're building today!
In this blog post, we will develop a customized AI Q&A app using the DeepSeek LLM, LangChain, and Streamlit. We will show how to build an AI assistant powered by DeepSeek Llama 70B that delivers high-quality, professional responses. Let's get started on this exciting journey.
Introduction
In this section, we will briefly discuss DeepSeek, LangChain, and Streamlit. A basic understanding of these tools will give us a solid foundation for creating an AI-powered Q&A app.
What is DeepSeek?
DeepSeek is a free AI-powered chatbot that looks and functions like ChatGPT, handling similar tasks with great accuracy.
It rivals OpenAI's o1 model in math and coding while using less memory and running more cost-efficiently. DeepSeek's reported development cost was around $6 million, compared with the roughly $100 million OpenAI is estimated to have spent creating GPT-4. Moreover, DeepSeek is free: people can use its advanced capabilities without paying for a subscription.
What is LangChain?
LangChain is a very popular open-source framework for developing LLM-powered AI applications. With LangChain, we can easily connect LLM providers (such as DeepSeek or OpenAI) to various data sources, integrate memory for context retention, and enable advanced reasoning capabilities.
Using LangChain, we can also create AI agents that retrieve real-time data, call APIs, and even perform reasoning tasks, all with minimal effort. It is widely used for chatbots, search applications, document analysis, and automation.
What is Streamlit?
Streamlit is an open-source Python framework for quickly building interactive web apps for machine learning and data science. With just a few lines of code, we can transform Python scripts into user-friendly applications. It helps bring ideas to life, visualize model outputs, create dashboards, and build AI-powered tools with minimal effort.
Streamlit is designed for speed and simplicity, so we can focus on our AI models and data while it takes care of the UI.
Practical Implementation: Building the AI-Powered Q&A App
In this section, we will develop an AI-powered Q&A app in a step-by-step manner. We will use the DeepSeek model for natural language processing, LangChain to link together several AI components, and Streamlit to build an interactive web interface.
The flowchart below may help you understand the code better.

Fig.1: Flow of AI-Powered Q&A App with LangChain and DeepSeek
Importing Required Libraries
The first step is to import the libraries required to build the app, as shown below.
import os
from dotenv import load_dotenv
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq
Explanation:
- `os`: Provides a way to interact with the operating system, such as reading environment variables.
- `dotenv`: Loads environment variables from a `.env` file.
- `streamlit`: A framework for creating web applications with Python; particularly useful for building data-driven apps.
- `langchain_core.prompts`: Provides tools to create and manage prompts for language models.
- `langchain_groq`: Allows interaction with the Groq API, which is used to access large language models.
Loading Environment Variables
Now load the environment variables from the .env file.
load_dotenv()
groq_api_key = os.getenv("GROQ_API_KEY")
Explanation:
- `load_dotenv()`: Loads environment variables from a `.env` file into the process environment.
- `os.getenv("GROQ_API_KEY")`: Retrieves the value of the `GROQ_API_KEY` environment variable, which is necessary to authenticate with the Groq API.
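One caveat: `os.getenv` returns `None` when the key is missing, so the error only surfaces on the first API call. A small stdlib-only guard can fail fast instead; the `require_env` helper below is our own addition, not part of `dotenv`:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file or shell.")
    return value

# Demo with a throwaway variable so the sketch is self-contained:
os.environ["DEMO_KEY"] = "demo-value"
print(require_env("DEMO_KEY"))  # prints "demo-value"
```

In the app, `groq_api_key = require_env("GROQ_API_KEY")` would then raise immediately with a readable message instead of failing deep inside the Groq client.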
Function to Load External CSS
The function below reads a CSS file used to make the Streamlit app visually appealing.
def load_css(file_name):
    with open(file_name, "r") as f:
        st.markdown(f"<style>{f.read()}</style>", unsafe_allow_html=True)
Explanation:
- `load_css(file_name)`: Reads a CSS file and injects its content into the Streamlit app using `st.markdown`. The `unsafe_allow_html=True` parameter allows HTML content to be rendered.
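To see what this helper produces without launching Streamlit, here is a Streamlit-free sketch of the same wrapping step; the `wrap_css` name and the temporary stylesheet are ours, purely for illustration:

```python
from pathlib import Path
import tempfile

def wrap_css(file_name: str) -> str:
    """Read a CSS file and wrap its contents in a <style> tag for st.markdown."""
    css = Path(file_name).read_text()
    return f"<style>{css}</style>"

# Self-contained demo with a temporary stylesheet:
with tempfile.NamedTemporaryFile("w", suffix=".css", delete=False) as f:
    f.write("h1 { color: teal; }")
print(wrap_css(f.name))  # prints "<style>h1 { color: teal; }</style>"
```

The string returned here is exactly what `load_css` passes to `st.markdown`, which is why `unsafe_allow_html=True` is required.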
Loading CSS File
load_css("styles.css")
Explanation:
- `load_css("styles.css")`: Calls the `load_css` function to load and apply the styles from `styles.css` to the Streamlit app.
Sidebar for Extra Features
st.sidebar.title("ℹ️ Instructions")
st.sidebar.markdown("""
- Enter your question in the input box.
- The AI will generate an **expert-level** response.
- Uses **DeepSeek Llama 70B** model via Groq.
""")
Explanation:
- `st.sidebar.title("ℹ️ Instructions")`: Adds a title to the sidebar.
- `st.sidebar.markdown(...)`: Adds Markdown content to the sidebar, providing instructions to the user.
Prompt Template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI expert specializing in deep learning. Provide **detailed, structured, and well-researched responses** in a clear and professional manner."),
    ("user", "Question: {question}")
])
Explanation:
- `ChatPromptTemplate.from_messages(...)`: Creates a prompt template with two messages: a system message that sets the context for the AI, instructing it to act as an expert in deep learning, and a user message whose placeholder will be replaced with the user's question.
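Conceptually, the template substitution behaves like plain string formatting: the `{question}` placeholder in the user message is filled in at call time. The stdlib sketch below mimics that behavior (it is not LangChain's actual implementation, and `format_messages` is our own name):

```python
# Each message is a (role, template) pair, mirroring the structure above.
messages = [
    ("system", "You are an AI expert specializing in deep learning."),
    ("user", "Question: {question}"),
]

def format_messages(messages, **kwargs):
    """Fill template placeholders in every message, like prompt.format(...)."""
    return [(role, text.format(**kwargs)) for role, text in messages]

formatted = format_messages(messages, question="What is DeepSeek?")
print(formatted[1])  # prints "('user', 'Question: What is DeepSeek?')"
```

This is why the app can later call `prompt.format(question=input_text)`: the template stays fixed while only the user's question varies.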
Setting Up the UI
st.markdown('<h1>💬 AI-Powered Q&A App</h1>', unsafe_allow_html=True)
input_text = st.text_input("🔍 Ask a question:")
Explanation:
- `st.markdown(...)`: Adds a styled title to the main page using HTML and CSS.
- `st.text_input("🔍 Ask a question:")`: Creates a text input box where users can type their questions.
Configuring the Language Model (LLM)
llm = ChatGroq(groq_api_key=groq_api_key, model_name="deepseek-r1-distill-llama-70b")
Explanation:
- `ChatGroq(...)`: Initializes the Groq chat model with the API key and specifies the model to be used (`deepseek-r1-distill-llama-70b` in this case).
Generating and Displaying Output
if input_text:
    response_text = llm.invoke(prompt.format(question=input_text)).content  # Extract only the text response
    st.markdown(f'<div>{response_text}</div>', unsafe_allow_html=True)
Explanation:
- `if input_text:`: Checks whether the user has entered a question.
- `llm.invoke(prompt.format(question=input_text))`: Formats the prompt with the user's question and sends it to the Groq model for processing.
- `.content`: Extracts the text content from the model's response.
- `st.markdown(...)`: Displays the model's response in a styled `div` using HTML and CSS.
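Because calling Groq requires a network connection and an API key, the request/response flow above can be exercised locally with a stand-in model. The `FakeLLM` class below is purely illustrative; it only mimics the `.invoke(...).content` shape of the real client:

```python
from dataclasses import dataclass

@dataclass
class FakeResponse:
    content: str  # mirrors the .content attribute on the model's response

class FakeLLM:
    """Stand-in model that echoes the prompt, mimicking llm.invoke(...)."""
    def invoke(self, prompt: str) -> FakeResponse:
        return FakeResponse(content=f"Expert answer to: {prompt}")

llm = FakeLLM()
prompt = "Question: What is DeepSeek?"
response_text = llm.invoke(prompt).content  # extract only the text, as in the app
print(response_text)  # prints "Expert answer to: Question: What is DeepSeek?"
```

Swapping `FakeLLM` for `ChatGroq` leaves the rest of the app unchanged, which makes the UI logic easy to test in isolation.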
Visualizing The Results
The figure below shows the AI-generated response to a user query. In this example, the query is "What is DeepSeek?", and the response is generated by the DeepSeek Llama 70B model.
Conclusions
In this article, we discussed how to develop an AI-powered Q&A app using DeepSeek, LangChain, and Streamlit. This project shows how modern AI tools can help create intelligent, user-friendly, and cost-effective applications with minimal effort.
Whether you are a beginner or an experienced developer, this article can empower you to bring your innovative ideas to life quickly and efficiently. The possibilities with these tools are endless, and the future of AI-driven applications is bright.
Frequently Asked Questions
What is DeepSeek?
DeepSeek is a free AI chatbot that excels at tasks like math and coding. It is comparable to OpenAI's o1 model, uses less memory, and is more cost-efficient. DeepSeek's reported development cost was around $6 million, compared with the roughly $100 million OpenAI is estimated to have spent creating GPT-4.
What is LangChain?
LangChain is an open-source framework that connects large language models to data sources. Using LangChain, we can quickly build innovative applications with easy integrations and memory for context.
How does Streamlit help with AI apps?
With Streamlit, we can quickly turn Python scripts into interactive web apps. It lets us focus on our AI models while it takes care of the interface.
Dr. Partha Majumder is a distinguished researcher specializing in deep learning, artificial intelligence, and AI-driven groundwater modeling. With a prolific track record, his work has been featured in numerous prestigious international journals and conferences. Detailed information about his research can be found on his ResearchGate profile. In addition to his academic achievements, Dr. Majumder is the founder of Paravision Lab, a pioneering startup at the forefront of AI innovation.