This project lets users generate and manipulate images from textual and visual inputs. The pipeline uses ChatGPT for prompt generation and Stable Diffusion for image generation and inpainting. The core steps are: take user inputs, generate detailed prompts, and produce the final images.
- User Input: The pipeline accepts two types of inputs from users:
  - Text Input: User-provided text describing the desired image.
  - Image Input: User-provided image to be used as a reference.
- Image-to-Text Model: Uses Salesforce's blip-image-captioning-large model to convert image inputs into text descriptions.
- ChatGPT API: Uses OpenAI's gpt-3.5-turbo-0125 to process user prompts and generate detailed prompts for the image generation model.
- Database: Stores project information and image data, including user prompts, generated prompts, and images.
- Stable Diffusion Model: Uses StabilityAI's stable-diffusion-2-inpainting for generating and inpainting images based on the processed prompts.
- Producer and Consumer:
  - Producer: Handles the image generation and manipulation process.
  - Consumer: Delivers the final images to the user.
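The image-to-text step above could look like the following minimal sketch, assuming the Hugging Face `transformers` BLIP classes. Model loading is deferred inside the function so the large checkpoint is only downloaded when a caption is actually requested; the function name and token limit are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of the image-to-text step using BLIP.
CAPTION_MODEL = "Salesforce/blip-image-captioning-large"

def caption_image(image) -> str:
    """Return a short text description of a PIL image (assumed helper)."""
    # Heavy imports are deferred so merely importing this module is cheap.
    from transformers import BlipForConditionalGeneration, BlipProcessor

    processor = BlipProcessor.from_pretrained(CAPTION_MODEL)
    model = BlipForConditionalGeneration.from_pretrained(CAPTION_MODEL)
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)
```

The resulting caption can then be folded into the user's text prompt before it is sent to the ChatGPT API.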
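The prompt-expansion step might be wired up as below, assuming the OpenAI Python client (v1+). The system instruction, message structure, and helper names are hypothetical; only the model name comes from the pipeline description.

```python
# Hypothetical sketch of the prompt-expansion step via the ChatGPT API.
from typing import Optional

MODEL = "gpt-3.5-turbo-0125"

def build_messages(user_text: str, image_caption: Optional[str] = None) -> list:
    """Compose the chat messages; the exact structure is an assumption."""
    content = user_text
    if image_caption is not None:
        content += f"\nReference image: {image_caption}"
    return [
        {"role": "system",
         "content": "Expand the user's idea into a detailed Stable Diffusion prompt."},
        {"role": "user", "content": content},
    ]

def generate_prompt(user_text: str, image_caption: Optional[str] = None) -> str:
    """Call the API and return the detailed prompt for the diffusion model."""
    from openai import OpenAI  # deferred so the module imports without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=MODEL,
        messages=build_messages(user_text, image_caption),
    )
    return response.choices[0].message.content
```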
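The inpainting step could be sketched as follows, assuming Hugging Face `diffusers`. The pipeline load is wrapped in a function so nothing heavy runs on import, and a small helper rounds dimensions down to multiples of 8, which Stable Diffusion requires; the function names are assumptions for illustration.

```python
# Hypothetical sketch of the generation/inpainting step.
MODEL_ID = "stabilityai/stable-diffusion-2-inpainting"

def round_to_multiple(value: int, base: int = 8) -> int:
    """Round a dimension down to the nearest multiple of `base` (min `base`)."""
    return max(base, (value // base) * base)

def inpaint(prompt: str, image, mask):
    """Inpaint the masked region of a PIL image according to `prompt`."""
    from diffusers import StableDiffusionInpaintPipeline  # deferred heavy import

    pipe = StableDiffusionInpaintPipeline.from_pretrained(MODEL_ID)
    result = pipe(
        prompt=prompt,
        image=image,
        mask_image=mask,
        width=round_to_multiple(image.width),
        height=round_to_multiple(image.height),
    )
    return result.images[0]
```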
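The producer/consumer handoff can be sketched with Python's standard `queue` module. The job fields and the placeholder "generation" string stand in for the actual Stable Diffusion call and delivery mechanism, which the source does not specify.

```python
# Minimal producer/consumer sketch using a thread-safe FIFO queue.
import queue
import threading

def producer(jobs: "queue.Queue", prompts: list) -> None:
    """Generate an image for each prompt and enqueue the result."""
    for prompt in prompts:
        image = f"image-for:{prompt}"  # placeholder for the diffusion call
        jobs.put({"prompt": prompt, "image": image})
    jobs.put(None)  # sentinel: no more work

def consumer(jobs: "queue.Queue", delivered: list) -> None:
    """Deliver finished images to the user (here: collect them in a list)."""
    while True:
        job = jobs.get()
        if job is None:
            break
        delivered.append(job)

jobs: "queue.Queue" = queue.Queue()
delivered: list = []
t_prod = threading.Thread(target=producer, args=(jobs, ["a red fox", "a lighthouse"]))
t_cons = threading.Thread(target=consumer, args=(jobs, delivered))
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
```

Because the queue is FIFO and there is a single consumer, images are delivered in the same order the producer finished them.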
```bash
git clone https://github.com/TailUFPB/ZoomAI
cd ZoomAI
pip install -r requirements.txt
```