One of WobblePic’s standout features is its ability to isolate individual objects within an image so they can wobble independently. Behind this capability is SAM2 (Segment Anything Model 2), Meta’s powerful AI segmentation model. In this article, we’ll explore what SAM2 is, how WobblePic integrates it locally on your machine, and why it transforms the wobble experience from a fun novelty into something genuinely impressive.

What Is SAM2?

SAM2, short for Segment Anything Model 2, is the second generation of Meta’s foundation model for image segmentation. Released as an open-source project, SAM2 can identify and separate virtually any object in an image — people, animals, food, furniture, abstract shapes — with remarkable accuracy.

What makes SAM2 special compared to traditional segmentation methods is its promptable nature. Instead of requiring carefully labeled training data for every type of object, SAM2 accepts simple prompts like a single click or a bounding box. You point at something, and SAM2 figures out where that object begins and ends. It understands object boundaries at a level that classical computer vision techniques simply cannot match.

SAM2 builds on the original SAM model with improved accuracy, better handling of ambiguous boundaries, and a more efficient architecture. It excels at understanding complex scenes where objects overlap, cast shadows, or blend into similar backgrounds.

How WobblePic Integrates SAM2 Locally

A key design decision in WobblePic is that all AI processing happens locally on your machine. Your images never leave your computer — there are no cloud API calls, no uploads, and no privacy concerns.

This local execution is made possible by two technologies:

ONNX Runtime

WobblePic uses ONNX Runtime to run the SAM2 model. ONNX (Open Neural Network Exchange) is an open standard format for representing machine learning models. By converting SAM2 to the ONNX format, WobblePic can run the model efficiently without requiring a full PyTorch or TensorFlow installation.

ONNX Runtime provides a lightweight, high-performance inference engine that loads the model and executes it with minimal overhead. This means WobblePic stays small and doesn’t need a heavy machine learning framework installed on your system.

DirectML for GPU Acceleration

To keep segmentation fast and responsive, WobblePic leverages DirectML (Direct Machine Learning) as the execution provider for ONNX Runtime. DirectML is a hardware-accelerated machine learning API built on top of DirectX 12 that works across a wide range of GPU hardware.

The advantage of DirectML is broad compatibility. Whether you have an NVIDIA, AMD, or Intel GPU, DirectML can accelerate the segmentation process. This is different from CUDA, which only works on NVIDIA hardware. By choosing DirectML, WobblePic ensures that GPU-accelerated segmentation is available to as many Windows users as possible.
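To make the provider setup concrete, here is a minimal Python sketch of how an application might order ONNX Runtime execution providers, preferring DirectML and falling back to CPU. The helper name and model filename are illustrative, not WobblePic's actual code:

```python
def pick_providers(available):
    """Order execution providers: prefer DirectML (GPU-accelerated on
    Windows across NVIDIA, AMD, and Intel hardware), fall back to CPU.
    `available` is the list onnxruntime.get_available_providers() returns."""
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# Usage with ONNX Runtime (shown for context, not executed here;
# "sam2_decoder.onnx" is a hypothetical filename):
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "sam2_decoder.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```

Because the preference list degrades gracefully, the same code path works on machines without a compatible GPU; inference simply runs on the CPU instead.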

On a typical modern GPU, SAM2 inference through ONNX Runtime with DirectML completes in just a few seconds — fast enough to feel interactive rather than disruptive to your workflow.

The Segmentation Workflow

Using AI segmentation in WobblePic is designed to be intuitive. Here’s how the process works from click to wobble:

Step 1: Click to Segment

With segmentation mode activated, you simply click on the object you want to isolate. This click serves as a point prompt for SAM2 — it tells the model “I’m interested in the object at this location.”

You can also click on background areas to provide negative prompts, helping SAM2 understand what you do not want included in the selection. This is useful when objects are close together or when the initial segmentation picks up more than intended.
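In SAM-family models, clicks are passed to the decoder as point coordinates paired with labels, where label 1 marks a foreground (positive) click and label 0 marks a background (negative) click. A small sketch of how positive and negative clicks could be packed into that shape (the helper is hypothetical):

```python
def build_point_prompt(positive, negative):
    """Combine positive and negative clicks into the coords/labels pair
    a SAM2 decoder expects: one (x, y) per point, one label per point
    (1 = include this object, 0 = exclude this region)."""
    coords = list(positive) + list(negative)
    labels = [1] * len(positive) + [0] * len(negative)
    return coords, labels
```

For example, one click on a cat at (120, 80) plus one background click at (20, 20) yields two coordinates with labels [1, 0].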

Step 2: Mask Generation

SAM2 processes the image along with your click coordinates and produces a segmentation mask — a pixel-level map that identifies exactly which pixels belong to the selected object and which belong to the background.

The model actually generates multiple candidate masks ranked by confidence. WobblePic selects the most appropriate mask, typically the one that best matches the scale implied by your click position. If you click near the center of a large object, it tends to select the full object. If you click near a small detail, it focuses on that specific part.
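The selection step above can be sketched in a few lines. SAM2 returns each candidate mask alongside a predicted quality score; this simplified version picks purely by score, whereas WobblePic also weighs the scale implied by the click position:

```python
def select_mask(masks, scores):
    """Pick the candidate mask with the highest predicted quality score.
    `masks` and `scores` are parallel lists, as returned by a SAM-style
    decoder (three candidates is typical)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return masks[best]
```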

Step 3: Separate Mesh Creation

Once the mask is generated, WobblePic creates a separate physics mesh for the segmented region. This is where the magic happens — instead of the entire image sharing a single mesh, the segmented object gets its own independent mesh with its own mass points and springs.

The mesh generation process traces the boundary of the segmentation mask, creates vertices along that boundary, and fills the interior with a triangulated grid. The boundary vertices are handled carefully to ensure a clean visual separation between the object and its surroundings.
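As a rough illustration of the interior fill, here is a simplified grid triangulation over a boolean mask. It is a sketch of the general technique, not WobblePic's actual mesher: it keeps a cell only when all four corners lie inside the mask and splits each kept cell into two triangles, without the careful boundary-vertex handling described above:

```python
def grid_mesh_from_mask(mask, step=1):
    """Build a triangulated grid covering the True region of a 2-D mask.

    mask: list of rows of booleans; step: grid cell size in pixels.
    Returns (vertices, triangles): vertices are (x, y) tuples,
    triangles are index triples into the vertex list.
    """
    h, w = len(mask), len(mask[0])
    index = {}                     # (x, y) -> vertex id, deduplicated
    vertices, triangles = [], []

    def vid(x, y):
        if (x, y) not in index:
            index[(x, y)] = len(vertices)
            vertices.append((x, y))
        return index[(x, y)]

    for y in range(0, h - step, step):
        for x in range(0, w - step, step):
            # keep a cell only if all four corners lie inside the mask
            if all(mask[y + dy][x + dx]
                   for dy in (0, step) for dx in (0, step)):
                a, b = vid(x, y), vid(x + step, y)
                c, d = vid(x, y + step), vid(x + step, y + step)
                triangles.append((a, b, c))   # split the cell into
                triangles.append((b, d, c))   # two triangles
    return vertices, triangles
```

A 3×3 all-True mask, for instance, produces a 2×2 grid of cells: 9 shared vertices and 8 triangles.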

Step 4: Independent Wobble

With separate meshes in place, each segmented region responds to physics independently. When you drag and release the segmented object, it wobbles on its own while the background remains still (or wobbles separately if you interact with it too).

This independence creates much more satisfying and realistic wobble effects. A cat’s face can jiggle while the sofa it sits on stays put. A slice of cake can bounce while the plate beneath it remains steady. The physics feel more believable because they match how objects actually behave — as separate physical entities.
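The per-point physics behind the wobble can be sketched as a damped spring pulling each mass point back toward its rest position, integrated with semi-implicit Euler. The constants here are illustrative placeholders, not WobblePic's tuned values:

```python
def wobble_step(pos, vel, rest, stiffness=40.0, damping=4.0, dt=1 / 60):
    """Advance one mass point by one timestep.

    Hooke spring toward `rest` plus velocity damping, integrated with
    semi-implicit Euler: update velocity first, then position with the
    new velocity (more stable than explicit Euler for springs).
    """
    ax = -stiffness * (pos[0] - rest[0]) - damping * vel[0]
    ay = -stiffness * (pos[1] - rest[1]) - damping * vel[1]
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

Because each segmented mesh owns its own set of points and springs, stepping one mesh never disturbs another; the background simply keeps its points at rest.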

Why Segmentation Transforms the Wobble Experience

Without segmentation, wobbling an image means distorting the entire rectangular frame. This can look fun, but it also looks artificial — real objects don’t deform uniformly with their backgrounds.

With segmentation, the wobble becomes object-aware. The physics apply to the shape of the actual object, not the rectangular frame. Rounded objects wobble in a round way. Elongated objects sway differently. The result looks and feels dramatically more natural and entertaining.

This is particularly effective with certain types of images:

  • Food photography: Segmenting a dessert or a bowl of ramen lets the food wobble like jelly while the table stays firm
  • Animal photos: Isolating a pet’s face or body creates adorable wobble effects that follow the animal’s natural shape
  • Product images: Objects against clean backgrounds segment beautifully, creating satisfying wobble animations

Technical Considerations

Running a model like SAM2 locally does come with some requirements. The model weights take up disk space, and inference requires a GPU with reasonable capabilities. WobblePic handles this gracefully — segmentation is an optional feature, and the application works perfectly well as a standard wobble viewer without it.

The initial model loading takes a moment the first time you use segmentation in a session, as the ONNX Runtime needs to load the model weights into GPU memory. Subsequent segmentation operations within the same session are much faster since the model stays loaded.
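The pattern is plain lazy initialization: pay the load cost on first use, then reuse the session for the rest of the process. A generic sketch, with `loader` standing in for the real ONNX Runtime setup:

```python
_session = None

def get_session(loader):
    """Create the expensive inference session on first call and cache it.
    `loader` is a zero-argument callable standing in for whatever builds
    the real ONNX Runtime session (hypothetical, for illustration)."""
    global _session
    if _session is None:
        _session = loader()      # slow path: runs once per process
    return _session              # fast path: every later segmentation
```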

Conclusion

The combination of SAM2’s powerful segmentation capabilities with WobblePic’s physics simulation creates an experience that’s greater than the sum of its parts. By running everything locally through ONNX Runtime and DirectML, WobblePic delivers AI-powered segmentation that’s fast, private, and broadly compatible with Windows hardware.

Ready to see AI segmentation in action? Check out the tutorial for a step-by-step guide, explore the gallery for examples of segmented wobble effects, or download WobblePic and try it yourself.