models/sam-2/ #14830
-
Hey everyone! We wanted to give you a quick update: our team is currently busy integrating SAM 2 (Segment Anything Model 2) into the Ultralytics package. This new model is super exciting, with features like real-time performance and the ability to segment objects it's never seen before. We're really pumped about what SAM 2 can do, but it's still in the works. Thanks for your patience as we get everything set up! We'll keep you posted as we make progress. Stay tuned!
-
Could you please provide an example of how to perform inference on a video? I would appreciate it if you could include details on the necessary steps and code snippets. Thank you!
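A minimal sketch of video inference with the Ultralytics SAM interface; the weight file and video path below are placeholders:

```python
from ultralytics import SAM

# Load a SAM 2 model (weights are downloaded on first use)
model = SAM("sam2_b.pt")

# stream=True yields results frame by frame instead of
# accumulating the whole video in memory
for result in model("path/to/video.mp4", stream=True):
    result.show()  # or inspect result.masks, result.save(), ...
```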
-
Is it usable with the ultralytics package at the moment, or should we wait until the implementation is done?
-
Hey dear people,
-
How can you fine-tune this model for your specific needs?
-
When will the training option be enabled for SAM 2?
-
Does this support text prompts? For example, I want to search for buildings or cars in an aerial image or video.
-
Is there an API to fine-tune the model on our own labeled photos? Also, what are the schema and shape for the inputs?
-
Thank you very much for your beautiful work! I have a question regarding prompts with several points. When I prompt your integration of SAM 2 with several points, I get several masks, one mask for each prompt point (which is what I want to get), while when I do the same thing using SAM 2 directly, I get one mask. Are there certain parameters that need to be used while initializing SAM 2 or while doing the prediction?
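For what it's worth, in the Ultralytics API the nesting of the points determines the behavior: a flat list is treated as one object per point, while an extra nesting level groups points into a single object. A sketch following the SAM 2 docs, with the image path and coordinates as placeholders:

```python
from ultralytics import SAM

model = SAM("sam2_b.pt")

# Two separate objects: flat list of points, one label per point,
# returns one mask per point
results = model("image.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])

# One object described by two points: nest points and labels one level
# deeper to get a single combined mask
results = model("image.jpg", points=[[[400, 370], [900, 370]]], labels=[[1, 1]])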
-
How can I use this sample code to run a detection (with YOLOv8) and segment that detection through a video (more like a real-time video stream)? I have run the code and it worked; I got a txt file with labels, but I don't know what to do next.
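One way to wire the two models together is to run the detector frame by frame and feed its boxes to SAM 2 as prompts. A sketch, with the model names and video path as placeholders:

```python
from ultralytics import SAM, YOLO

detector = YOLO("yolov8n.pt")   # any trained detection model
segmenter = SAM("sam2_b.pt")

# stream=True processes the video (or a camera index) frame by frame
for det in detector("path/to/video.mp4", stream=True):
    boxes = det.boxes.xyxy  # detected boxes in xyxy format
    if len(boxes):
        # use the detections as box prompts for SAM 2 on the same frame
        seg = segmenter(det.orig_img, bboxes=boxes)
        seg[0].show()
```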
-
Hello! I am trying to test SAM 2, but it says "sam2_b.pt is not a supported SAM model. Available models are: ...". My code:

```python
from ultralytics import SAM

# Load a model
model = SAM("/backend/model/sam2_b.pt")

# Display model information (optional)
model.info()

# Run inference
model("Input_videos/video.mp4")
```
-
Hey there, I was wondering how I can set foreground and background points and then visualize the segmentation properly? Here is my current code, but it gives each point's segmentation a distinct color.
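A sketch of foreground/background prompting: label 1 marks a foreground point and label 0 a background point, and nesting all points under one object should return a single mask (hence one color when plotted). The coordinates and image path are placeholders, and the nesting follows the form used in the SAM 2 docs:

```python
from ultralytics import SAM

model = SAM("sam2_b.pt")

# One object: two foreground points (label 1) and one background point (label 0)
results = model(
    "image.jpg",
    points=[[[420, 300], [480, 340], [100, 100]]],
    labels=[[1, 1, 0]],
)

# A single mask comes back for the object, so plotting uses one color
results[0].show()
```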
-
The web page shows the example below, but what options do I have for saving the results using Ultralytics' built-in methods?
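The Results object exposes several built-in saving helpers on recent Ultralytics releases; the output paths below are placeholders, and it's worth checking each against your installed version:

```python
from ultralytics import SAM

model = SAM("sam2_b.pt")

# save=True writes annotated media under runs/ automatically
results = model("image.jpg", save=True)

r = results[0]
r.save("annotated.jpg")    # annotated image
r.save_txt("labels.txt")   # label file with polygons/boxes
r.save_crop("crops/")      # cropped regions, one file per object
```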
-
The SAM and SAM 2 documentation pages have an identical block called 'SAM comparison vs YOLOv8'. My wish is to have updated results in that section, but if that would be significant work, I would rather remove it. If an update is performed, enumerating all the SAM 2 variants would be nice.
-
Hello Glenn,
-
Breaking News: 🚨 SAM 1.0 outperforms SAM 2.0 on low-contrast images like CT scan and MRI images! Use case with Ultralytics: https://www.youtube.com/watch?v=vMI-TnyNLYU Please do like and subscribe for exciting videos like the one above. #SAM #SAM2.0 #MetaAI #ImageSegmentation #AI #MachineLearning #DeepLearning #DataMask #BreakingNews
-
I understand that SAM 2 allows include and exclude operations for multiple points. Is this feature available in Ultralytics? If so, could you provide a simple example? Additionally, I am wondering whether the object-tracking functionality can be applied to live camera streaming as well. I don't just want image segmentation; I need to track a specified object in real-time "streaming" video.
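Include/exclude maps to the point labels (1 includes, 0 excludes), and for video there is a dedicated SAM2VideoPredictor in the Ultralytics SAM module. A sketch following the docs' predictor example; treating a camera index as the source is an untested assumption here:

```python
from ultralytics.models.sam import SAM2VideoPredictor

# Create the video predictor
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
predictor = SAM2VideoPredictor(overrides=overrides)

# One object with an include point (label 1) and an exclude point (label 0);
# the prompt is set on the first frame and tracked through the video
results = predictor(source="test.mp4", points=[[[920, 470], [909, 138]]], labels=[[1, 0]])

# For a live feed, a camera index may work as the source (assumption):
# results = predictor(source=0, points=[[[920, 470], [909, 138]]], labels=[[1, 0]])
```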
-
Hi Glenn
-
Hi, may I know whether the current version of SAM 2 supports multi-point prompting for one object? Or can it only treat each point prompt as an individual object?
-
First of all, thanks for the great job. I want to get the details of the "label" of SAM 2. Is it the same as COCO, or where can I find the whole label list?
-
Hello! Is it planned to integrate EfficientSAM? https://github.com/yformer/EfficientSAM
-
If I need to use SAM 2 to segment the objects I want in an image, do I need to train it in YOLO on my custom dataset?
-
Is there a good way to use SAM 2 as an object tracker while performing object detection with YOLO? I know YOLO has built-in tracking functionality, but I'm having issues because in the video I'm working with, when objects overlap or disappear and reappear, their IDs get changed. Thanks in advance!
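On the ID-switch side, the built-in trackers are configurable; BoT-SORT with ReID enabled and a longer track buffer can help re-associate objects after occlusion. A sketch using the stock tracker config (editing a copy of botsort.yaml to set with_reid and track_buffer is left to you; these keys exist in recent releases):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# BoT-SORT is the default tracker; pass a customized YAML to tune it
for r in model.track("path/to/video.mp4", tracker="botsort.yaml", stream=True):
    if r.boxes.id is not None:
        print(r.boxes.id.tolist())  # persistent track IDs for this frame
```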
-
Hi, in the following example I want to track point (900, 370), where the point coordinates are for frame 34:

```python
from ultralytics import SAM

model = SAM("sam2.1_b.pt")
model("path/to/video.mp4", points=[900, 370], labels=[1],
      frame_id=[34]  # error, not defined
)
```
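The Ultralytics call above has no frame_id argument; as far as I can tell, prompts in the Ultralytics integration attach to the first frame. Meta's own sam2 package does expose a frame index on its video predictor, so a sketch with that API (config and checkpoint paths are placeholders) might look like:

```python
import numpy as np
from sam2.build_sam import build_sam2_video_predictor

# Build Meta's video predictor from a config and checkpoint
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_b+.yaml", "checkpoints/sam2.1_hiera_base_plus.pt"
)

# init_state expects a directory of JPEG frames in the reference repo
state = predictor.init_state(video_path="path/to/video_frames")

# Attach the point prompt at frame 34, then propagate through the video
predictor.add_new_points_or_box(
    inference_state=state,
    frame_idx=34,
    obj_id=1,
    points=np.array([[900, 370]], dtype=np.float32),
    labels=np.array([1], dtype=np.int32),
)
for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
    pass  # per-frame masks for the tracked object
```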
-
Hi @glenn-jocher, in the sam2 video_predictor_example (https://github.com/facebookresearch/sam2?tab=readme-ov-file) there's that method; I would expect the same args in the Ultralytics framework, no?
-
Hi!
-
How to get the labels? With model.model.names or with model.names?
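In the Ultralytics wrapper, names is typically a thin property that reads the mapping from the underlying module, so both spellings should resolve to the same dict. Note that SAM is promptable and class-agnostic, so the mapping may be a generic placeholder rather than COCO-style class names; a quick check on your install:

```python
from ultralytics import SAM

model = SAM("sam2_b.pt")
print(model.names)        # property on the wrapper
print(model.model.names)  # same mapping on the underlying nn.Module
```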
-
Hello! How can I set a mask (black and white image) as a prompt? Thank you.
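The documented prompts are bboxes, points, and labels; the SAM predictor's low-level inference signature also takes a masks argument, but whether a mask prompt is plumbed through the public API may depend on your version. A heavily hedged sketch of the idea, with the mask path as a placeholder and the set_prompts route flagged as an assumption:

```python
import cv2
import numpy as np
from ultralytics.models.sam import SAM2Predictor

# Build a SAM 2 predictor (overrides mirror the docs' predictor examples)
overrides = dict(conf=0.25, task="segment", mode="predict", imgsz=1024, model="sam2_b.pt")
predictor = SAM2Predictor(overrides=overrides)

# Load the black-and-white prompt mask and binarize it; SAM-style models
# expect a low-resolution mask prompt, so resizing may also be needed
mask = cv2.imread("prompt_mask.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype(np.float32)

# Assumption: set_prompts forwards a masks entry to inference the same way
# it forwards bboxes/points; verify against your installed version
predictor.set_prompts(dict(masks=mask[None]))
results = predictor(source="image.jpg")
```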
-
Hello! SAM 2 allows adding prompts at different frame indices, not just the first frame in the stream. How can this be done in Ultralytics?
-
Discover SAM 2, the next generation of Meta's Segment Anything Model, supporting real-time promptable segmentation in both images and videos with state-of-the-art performance. Learn about its key features, datasets, and how to use it.
https://docs.ultralytics.com/models/sam-2/