# Realtime Data Streaming With TCP Socket, Apache Spark, OpenAI LLM, Kafka and Elasticsearch | End-to-End Data Engineering Project
- Introduction
- System Architecture
- What You'll Learn
- Technologies
- Getting Started
- Watch the Video Tutorial
## Introduction

This project serves as a comprehensive guide to building an end-to-end data engineering pipeline using a TCP/IP socket, Apache Spark, OpenAI LLM, Kafka and Elasticsearch. It covers every stage: data acquisition, stream processing, sentiment analysis with ChatGPT, production to a Kafka topic, and indexing in Elasticsearch.
## System Architecture

The project is designed with the following components:

- Data Source: We use the yelp.com dataset for our pipeline.
- TCP/IP Socket: Used to stream data over the network in chunks (see the sketch after this list).
- Apache Spark: For data processing with its master and worker nodes.
- Confluent Kafka: Our Kafka cluster on the cloud.
- Control Center and Schema Registry: Help monitor and manage the schemas of our Kafka streams.
- Kafka Connect: For connecting Kafka to Elasticsearch.
- Elasticsearch: For indexing and querying the processed data.
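To make the socket stage concrete, here is a minimal sketch of a TCP server that replays Yelp reviews to a connected client, one newline-delimited JSON record at a time. The file name, host, port, and delay below are illustrative assumptions, not values taken from the repository.

```python
import json
import socket
import time

def stream_yelp_reviews(file_path="yelp_academic_dataset_review.json",
                        host="127.0.0.1", port=9999, delay=0.5):
    """Send one JSON review per line to a single connected client."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    print(f"Listening on {host}:{port} ...")
    conn, addr = server.accept()
    print(f"Client connected: {addr}")
    try:
        with open(file_path, "r") as f:
            for line in f:
                record = json.loads(line)            # one review per line in the dataset
                payload = json.dumps(record) + "\n"  # newline-delimited for the consumer
                conn.sendall(payload.encode("utf-8"))
                time.sleep(delay)                    # throttle to simulate a live stream
    finally:
        conn.close()
        server.close()

if __name__ == "__main__":
    stream_yelp_reviews()
```

Sending newline-delimited JSON keeps the consumer side simple: Spark's socket source yields one row per line, so each row parses directly into a review record.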
## What You'll Learn

- Setting up a data pipeline with TCP/IP
- Real-time data streaming with Apache Kafka
- Data processing techniques with Apache Spark
- Real-time sentiment analysis with OpenAI ChatGPT (a sketch follows this list)
- Synchronising data from Kafka to Elasticsearch (a second sketch follows this list)
- Indexing and querying data on Elasticsearch
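The heart of the pipeline can be sketched as follows: Spark reads the socket stream, a UDF asks ChatGPT for a sentiment label per review, and the enriched rows are serialized to a Kafka topic. The schema fields, topic name, model choice, broker address, and the `openai>=1.0` client are assumptions for illustration; the repository's actual code may differ.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, struct, to_json, udf
from pyspark.sql.types import FloatType, StringType, StructField, StructType

def classify_sentiment(text: str) -> str:
    """Ask ChatGPT for a one-word sentiment label for a review."""
    from openai import OpenAI  # imported on the executor; openai>=1.0 client assumed
    client = OpenAI()          # reads OPENAI_API_KEY from the worker's environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Classify the sentiment of this review as POSITIVE, "
                       f"NEGATIVE or NEUTRAL. Answer with one word only.\n\n{text}",
        }],
    )
    return resp.choices[0].message.content.strip()

sentiment_udf = udf(classify_sentiment, StringType())

# A reduced view of the Yelp review schema; the real dataset has more fields.
schema = StructType([
    StructField("review_id", StringType()),
    StructField("business_id", StringType()),
    StructField("stars", FloatType()),
    StructField("text", StringType()),
])

spark = SparkSession.builder.appName("YelpSentimentStream").getOrCreate()

# Read newline-delimited JSON reviews from the TCP socket source.
reviews = (spark.readStream.format("socket")
    .option("host", "127.0.0.1")
    .option("port", 9999)
    .load()
    .select(from_json(col("value"), schema).alias("r"))
    .select("r.*")
    .withColumn("sentiment", sentiment_udf(col("text"))))

# Serialize each enriched row to JSON and produce it to a Kafka topic.
query = (reviews
    .select(to_json(struct(*reviews.columns)).alias("value"))
    .writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("topic", "yelp_reviews")
    .option("checkpointLocation", "/tmp/checkpoints/yelp_reviews")
    .start())

query.awaitTermination()
```

Pointing this at Confluent Cloud rather than a local broker additionally requires the `kafka.security.protocol`, `kafka.sasl.mechanism` and `kafka.sasl.jaas.config` options, and the `spark-sql-kafka-0-10` package must be on the Spark classpath.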
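For the Kafka-to-Elasticsearch leg, the Elasticsearch sink connector is typically registered through the Kafka Connect REST API, after which the topic's documents can be queried from the resulting index. The Connect URL, topic/index name, and connector settings below are assumptions used for illustration.

```python
import requests

CONNECT_URL = "http://localhost:8083/connectors"  # Kafka Connect REST endpoint (assumed)

# Register the Confluent Elasticsearch sink connector for the reviews topic.
connector = {
    "name": "yelp-reviews-es-sink",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "yelp_reviews",
        "connection.url": "http://elasticsearch:9200",
        "key.ignore": "true",     # let Elasticsearch generate document IDs
        "schema.ignore": "true",  # index the JSON as-is, without a registered schema
    },
}
resp = requests.post(CONNECT_URL, json=connector)
resp.raise_for_status()

# Once documents flow in, query the index (named after the topic by default).
query = {"query": {"match": {"sentiment": "NEGATIVE"}}}
hits = requests.get("http://localhost:9200/yelp_reviews/_search", json=query).json()
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["review_id"], hit["_source"]["sentiment"])
```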
## Technologies

- Python
- TCP/IP
- Confluent Kafka
- Apache Spark
- Docker
- Elasticsearch
## Getting Started

1. Clone the repository:
   `git clone https://github.com/airscholar/E2EDataEngineering.git`
2. Navigate to the project directory:
   `cd E2EDataEngineering`
3. Run Docker Compose to spin up the Spark cluster:
   `docker-compose up`
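As a point of reference for what `docker-compose up` starts, a Spark master/worker pair alone might look like the following sketch using the Bitnami images. The service names, image tag, and resource settings here are assumptions, not the repository's actual compose file, which also defines the other services in the stack.

```yaml
version: "3"
services:
  spark-master:
    image: bitnami/spark:3.5
    environment:
      - SPARK_MODE=master
    ports:
      - "8080:8080"   # Spark master web UI
      - "7077:7077"   # Spark master RPC port

  spark-worker:
    image: bitnami/spark:3.5
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=2
    depends_on:
      - spark-master
```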
For more detailed instructions, please check out the video tutorial linked below.
## Watch the Video Tutorial

For a complete walkthrough and practical demonstration, check out the video here: