General-Purpose Usage

While PaperVision is primarily designed for FTC robotics, the code it generates—a standard Java class extending OpenCvPipeline—can be used in any Java Virtual Machine (JVM) application, whether desktop or server. The best way to run your pipelines in a general-purpose environment is with the VisionLoop library.

Integrating VisionLoop with Gradle

1. Add the Repository

You first need to tell Gradle where to find the VisionLoop library, which is hosted on Maven Central. Add the mavenCentral() repository to your repositories block.

Gradle

repositories {
    mavenCentral()
    // Add any other repositories you use (e.g., Google, JitPack)
}

2. Add the Dependencies

Next, add the core VisionLoop library and any optional modules (like the streaming module for web output) to your dependencies block.

The core library is always required; the streaming module is only needed if you plan to view the output remotely in a web browser.

dependencies {
    // Core VisionLoop Library
    implementation 'org.deltacv.visionloop:visionloop:x.y.z' 
    
    // Optional: Add the streaming module for MJPEG output (web browser)
    implementation 'org.deltacv.visionloop:streaming:x.y.z' 
    
    // Add your other dependencies here (e.g., Kotlin, JUnit)
}
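If your project uses the Kotlin DSL (build.gradle.kts) instead of the Groovy script shown above, the equivalent declarations look like this. The group IDs and the x.y.z placeholder are carried over unchanged from the Groovy example:

```kotlin
repositories {
    mavenCentral()
}

dependencies {
    // Core VisionLoop library
    implementation("org.deltacv.visionloop:visionloop:x.y.z")

    // Optional: MJPEG streaming module for web-browser output
    implementation("org.deltacv.visionloop:streaming:x.y.z")
}
```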

The version number (x.y.z in the examples above) changes with each release. Check the official VisionLoop GitHub repository (http://github.com/deltacv/visionloop) to find the current stable version, and use it for both dependencies.

Integrating Your Pipeline

The core steps to use your PaperVision-generated pipeline outside of FTC involve setting up a VisionLoop configuration:

1. Set the Input Source

First, you tell VisionLoop where to get its images. Unlike FTC, where the source is the robot's camera, here you can specify a webcam index, a file path, or an image resource.

2. Add Your Pipeline as a Processor

Next, you insert your generated pipeline class into the VisionLoop chain using the .then() method. This instructs VisionLoop to pass every frame it receives through your custom logic.

3. Display the Output

Finally, you tell VisionLoop how to show the processed image. You can open a live display window on your computer or stream the results to a web browser.

Example: Running with a Webcam

This code snippet shows how to quickly create a VisionLoop, attach your generated MyPaperVisionPipeline class, and display the result in a window on your computer:

import io.github.deltacv.visionloop.VisionLoop;

public class DesktopVisionApp {
    public static void main(String[] args) {
        // Assume 'MyPaperVisionPipeline' is the class generated by PaperVision
        MyPaperVisionPipeline pipeline = new MyPaperVisionPipeline(); 
        
        // Build the vision loop using a fluent interface
        var loop = VisionLoop.withWebcamIndex(0) // 1. Set the input source (Webcam 0)
            .then(pipeline)                   // 2. Add your custom pipeline as a processor
            .withLiveView()                   // 3. Open a live window on the computer
            .build();
            
        // Run the vision loop on the current thread
        loop.runBlocking(); 
        
        // Alternatively, use loop.toAsync().run() to run in the background
    }
}
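If you need the main thread free for other work, the asynchronous variant mentioned in the comment above can be sketched as follows. This is a sketch built from the calls shown in this guide (withWebcamIndex, then, withLiveView, build, toAsync); it assumes toAsync() returns a runnable loop that processes frames on a background thread, so consult the VisionLoop repository for the exact API:

```java
import io.github.deltacv.visionloop.VisionLoop;

public class AsyncVisionApp {
    public static void main(String[] args) {
        // 'MyPaperVisionPipeline' is the class generated by PaperVision
        var loop = VisionLoop.withWebcamIndex(0)
            .then(new MyPaperVisionPipeline())
            .withLiveView()
            .build();

        // Run frame processing in the background instead of blocking here
        // (assumed behavior of toAsync(); see the repository for details)
        loop.toAsync().run();

        // The main thread is now free for other work (UI, networking, etc.)
    }
}
```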
