
VirtualFlyBrain/imageFileConvertion


VFB Image File Conversion to Neuroglancer Precomputed Format

Overview

This repository contains scripts and documentation for converting Virtual Fly Brain (VFB) image files into the Neuroglancer precomputed format, for use in Neuroglancer and other viewers that read that format.

VFB image data lives at https://www.virtualflybrain.org/data/VFB/i/{first4}/{last4}/{template_id}/ and contains several file types per image:

File            Format            Description
volume.nrrd     NRRD              3D volumetric voxel data
volume_man.obj  OBJ mesh          Manually curated or generated mesh (vertices + faces)
volume.obj      OBJ point cloud   Auto-generated point cloud (vertices only, no faces)
volume.swc      SWC               Neuron skeleton/morphology tracing
volume.wlz      Woolz             Format for the IIP3D slice viewer

VFB Image Categories & Conversion Strategies

There are four distinct categories of VFB images, each requiring a different conversion approach:


Category A: Templates with Painted Domains (Index Regions)

What they are: Reference brain templates (e.g., JRC2018Unisex VFB_00101567) that contain a segmentation label field — an integer-indexed NRRD volume where each voxel value maps to a specific anatomical region (neuropil).

Example data structure:

  • Template VFB_00101567 has 46 painted domains
  • Voxel value 0 = background/template, 3 = medulla (ME), 7 = calyx (CA), 16 = fan-shaped body (FB), etc.
  • Each domain has its own VFB ID (e.g., VFB_00102107 for medulla)
  • Domain metadata includes: label, FBbt ontology ID, index value, type

Conversion approach:

  1. Download the template NRRD (this is the full label field with all domains)
  2. Convert NRRD → precomputed segmentation format (each voxel value = segment ID)
  3. Generate meshes for each segment using marching cubes
  4. Map segment IDs to VFB domain labels/ontology terms

Script: convert_templates.py

python convert_templates.py \
  --template-id VFB_00101567 \
  --output-dir output/templates/ \
  --verbose
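Step 2 of this pipeline amounts to re-chunking the label field into the precomputed layout plus an `info` JSON. A minimal sketch, assuming the NRRD has already been loaded into a numpy label array (e.g. via pynrrd) and using a single scale level; the function name and default resolution are illustrative, and the mesh-generation step is omitted here:

```python
import json
from pathlib import Path

import numpy as np

CHUNK = 64  # chunk edge length for the output layout


def write_precomputed_scale(labels: np.ndarray, out_dir: Path,
                            resolution_nm=(519, 519, 1000)) -> None:
    """Write a single-scale, raw-encoded precomputed segmentation layer.

    labels: 3D integer array indexed [x, y, z] (Neuroglancer convention),
    where each voxel value is a segment ID (0 = background).
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    sx, sy, sz = labels.shape
    info = {
        "@type": "neuroglancer_multiscale_volume",
        "type": "segmentation",
        "data_type": str(labels.dtype),
        "num_channels": 1,
        "scales": [{
            "key": "0",
            "size": [sx, sy, sz],
            "resolution": list(resolution_nm),
            "chunk_sizes": [[CHUNK, CHUNK, CHUNK]],
            "encoding": "raw",
        }],
    }
    (out_dir / "info").write_text(json.dumps(info))
    scale_dir = out_dir / "0"
    scale_dir.mkdir(exist_ok=True)
    for x0 in range(0, sx, CHUNK):
        for y0 in range(0, sy, CHUNK):
            for z0 in range(0, sz, CHUNK):
                x1, y1, z1 = (min(x0 + CHUNK, sx), min(y0 + CHUNK, sy),
                              min(z0 + CHUNK, sz))
                chunk = labels[x0:x1, y0:y1, z0:z1]
                # Raw encoding stores the chunk with x varying fastest,
                # i.e. C order over (z, y, x), little-endian.
                (scale_dir / f"{x0}-{x1}_{y0}-{y1}_{z0}-{z1}").write_bytes(
                    np.ascontiguousarray(chunk.transpose(2, 1, 0)).tobytes())
```

Chunk filenames follow the `{x0}-{x1}_{y0}-{y1}_{z0}-{z1}` pattern shown in the output layout section.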

Category B: Neurons with Existing Full Meshes (OBJ)

What they are: Neurons (especially from FlyWire, FAFB, and newer datasets) that already have proper triangulated OBJ meshes (volume_man.obj) generated by trimesh. These have vertices AND faces.

How to identify: The volume_man.obj file contains both v (vertex) and f (face) lines, and was typically generated by trimesh or similar tools.
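That check is easy to script: scan the OBJ for at least one face line. A minimal sketch (the function name is illustrative):

```python
def obj_has_faces(path: str) -> bool:
    """True if the OBJ contains at least one 'f' (face) line,
    i.e. it is a real triangulated mesh rather than a VFB point cloud."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("f "):
                return True
    return False
```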

Example:

  • FlyWire neuron VFB_fw121683 — volume_man.obj has ~61K lines with full triangle mesh
  • FlyEM-HB neuron VFB_jrchjx4q on JRC2018U — volume_man.obj has ~22K lines with full mesh

Conversion approach:

  1. Download the OBJ mesh file
  2. Parse vertices and faces
  3. Convert directly to precomputed mesh format (Neuroglancer legacy mesh or multi-resolution mesh)
  4. No marching cubes needed — this is the fastest conversion path

Script: convert_obj_meshes.py

python convert_obj_meshes.py \
  --input-obj volume_man.obj \
  --output-dir output/meshes/ \
  --vfb-id VFB_fw121683 \
  --template-id VFB_00101567 \
  --verbose
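Step 3 of this path is a direct re-encoding: the legacy mesh fragment format is a uint32 vertex count, followed by float32 xyz triples, followed by uint32 triangle indices (all little-endian). A minimal sketch, assuming vertex coordinates are already in nanometres; the function name is illustrative:

```python
import struct


def obj_to_legacy_mesh(obj_text: str) -> bytes:
    """Encode an OBJ mesh as a Neuroglancer legacy mesh fragment:
    uint32 vertex count, float32 xyz per vertex, uint32 triangle indices."""
    verts, faces = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            # OBJ faces are 1-indexed and may carry /vt/vn suffixes.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    buf = struct.pack("<I", len(verts))
    for v in verts:
        buf += struct.pack("<3f", *v)
    for f in faces:
        buf += struct.pack("<3I", *f)
    return buf
```

The accompanying `{seg_id}:0` manifest is then a small JSON of the form `{"fragments": ["{seg_id}"]}` pointing at the fragment file.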

Category C: SWC Skeletons Without Full Meshes → Generate Mesh from SWC

What they are: Neurons that have SWC skeleton tracings but either:

  • Only have a point-cloud OBJ (volume.obj with vertices only, no faces), or
  • Have no mesh at all

How to identify: The volume.obj is a point cloud (auto-generated by VFB) — it has v lines with 4 values (x, y, z, weight) but zero f (face) lines. The SWC file provides the skeleton with node positions and radii.

SWC format recap:

# PointNo Label X Y Z Radius Parent
1 0 227.18 70.34 59.46 0.07 -1
2 0 227.29 70.41 59.51 0.12 1

Each node has a 3D position and radius. Parent relationships define the tree structure.
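Parsing this format is a few lines; a minimal sketch returning a node table (the function name is illustrative):

```python
def parse_swc(text: str) -> dict:
    """Parse SWC text into {node_id: (x, y, z, radius, parent_id)}.
    Lines starting with '#' are comments; parent -1 marks the root."""
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        nid, _label, x, y, z, r, parent = line.split()[:7]
        nodes[int(nid)] = (float(x), float(y), float(z), float(r), int(parent))
    return nodes
```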

Conversion approach:

  1. Parse the SWC file to get the skeleton tree (nodes, positions, radii, connectivity)
  2. Generate a mesh by "inflating" the skeleton:
    • For each edge (parent→child), create a truncated cone (frustum) connecting the two nodes
    • Use the node radii to set the cone diameters at each end
    • At branch points, create sphere caps to smooth the junctions
  3. Merge all tube segments into a single watertight mesh
  4. Convert to precomputed mesh format
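The per-edge frustum in step 2 can be built directly as a triangle strip around the edge axis. A minimal sketch of one open frustum (no end caps or branch-point spheres), with `--tube-sides`-style control over the ring resolution; the function name is illustrative:

```python
import numpy as np


def frustum(p0, r0, p1, r1, sides=8):
    """Triangulated open frustum (truncated cone) from node p0 (radius r0)
    to node p1 (radius r1). Returns (vertices, faces) as numpy arrays."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    axis /= np.linalg.norm(axis)
    # Any vector not parallel to the axis yields a perpendicular basis.
    helper = np.array([1.0, 0, 0]) if abs(axis[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    ang = np.linspace(0, 2 * np.pi, sides, endpoint=False)
    ring = np.cos(ang)[:, None] * u + np.sin(ang)[:, None] * v
    verts = np.vstack([p0 + r0 * ring, p1 + r1 * ring])
    faces = []
    for i in range(sides):
        j = (i + 1) % sides
        faces.append([i, j, sides + i])          # lower triangle of the quad
        faces.append([j, sides + j, sides + i])  # upper triangle
    return verts, np.array(faces)
```

One frustum per parent→child edge, merged and capped, yields the inflated tube mesh.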

Libraries that can do this:

  • navis — navis.conversion.tree2meshneuron() converts SWC skeletons to meshes using CGAL or convex hulls
  • trimesh — Can create and merge tube/cylinder primitives
  • Custom implementation — Generate tube segments directly as triangle meshes

Script: convert_swc_to_mesh.py

python convert_swc_to_mesh.py \
  --input-swc volume.swc \
  --output-dir output/meshes/ \
  --vfb-id VFB_jrchjx4q \
  --tube-sides 8 \
  --verbose

Category D: Everything Else — Generate Mesh from NRRD Volume

What they are: Images that don't have usable meshes and aren't skeletons — primarily:

  • Expression patterns (confocal images)
  • Individual painted domain NRRDs
  • Any neuron that only has volumetric data

Conversion approach:

  1. Download the NRRD volume
  2. Threshold/binarize if needed (for confocal data)
  3. Convert NRRD → precomputed segmentation format
  4. Run marching cubes to generate a mesh from the volume
  5. Optionally remove dust (small disconnected components)

This is essentially the approach from MetaCell/virtual-fly-brain PR #207.
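Steps 2 and 5 can be sketched without external segmentation libraries (the marching-cubes step itself would typically use something like skimage.measure.marching_cubes). A minimal pure-numpy sketch of binarization plus dust removal via 6-connected components; the function name and parameters mirror the `--dust-threshold` idea but are illustrative:

```python
from collections import deque

import numpy as np


def binarize_and_dedust(vol, threshold, min_voxels):
    """Binarize `vol` at `threshold`, then drop 6-connected components
    smaller than `min_voxels` voxels ('dust')."""
    mask = vol > threshold
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        # Breadth-first flood fill to collect one connected component.
        comp, q = [start], deque([start])
        seen[start] = True
        while q:
            x, y, z = q.popleft()
            for dx, dy, dz in nbrs:
                n = (x + dx, y + dy, z + dz)
                if (0 <= n[0] < mask.shape[0] and 0 <= n[1] < mask.shape[1]
                        and 0 <= n[2] < mask.shape[2]
                        and mask[n] and not seen[n]):
                    seen[n] = True
                    comp.append(n)
                    q.append(n)
        if len(comp) >= min_voxels:  # keep only non-dust components
            for c in comp:
                out[c] = True
    return out
```

In practice scipy.ndimage.label does the component labelling far faster; the explicit BFS just makes the logic visible.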

Script: convert_nrrd.py

python convert_nrrd.py \
  --input-nrrd volume.nrrd \
  --output-dir output/volumes/ \
  --dust-threshold 100 \
  --verbose

Decision Flow

For each VFB image:
  │
  ├─ Is it a Template with painted domains?
  │   └─ YES → Category A: convert_templates.py
  │       (NRRD label field → precomputed segmentation + meshes per region)
  │
  ├─ Does it have a volume_man.obj with faces?
  │   └─ YES → Category B: convert_obj_meshes.py
  │       (OBJ → precomputed mesh directly)
  │
  ├─ Does it have a volume.swc AND no full mesh?
  │   └─ YES → Category C: convert_swc_to_mesh.py
  │       (SWC skeleton → inflated tube mesh → precomputed mesh)
  │
  └─ Otherwise (only has NRRD volume)
      └─ Category D: convert_nrrd.py
          (NRRD → marching cubes → precomputed mesh)
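The decision flow reduces to a small priority-ordered function; a minimal sketch with illustrative boolean inputs:

```python
def classify_image(is_template: bool, has_full_obj: bool, has_swc: bool) -> str:
    """Map an image's available files to its conversion category (A-D),
    mirroring the decision flow: template > full OBJ > SWC > NRRD-only."""
    if is_template:
        return "A"   # convert_templates.py
    if has_full_obj:
        return "B"   # convert_obj_meshes.py
    if has_swc:
        return "C"   # convert_swc_to_mesh.py
    return "D"       # convert_nrrd.py
```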

Output: Neuroglancer Precomputed Format

All conversion paths produce the same output structure:

output/{vfb_id}/
  ├── info                  # JSON metadata (type, data_type, scales, resolution)
  ├── mesh/
  │   ├── info              # Mesh metadata (@type: neuroglancer_legacy_mesh)
  │   ├── {seg_id}          # Binary mesh data per segment
  │   └── {seg_id}:0        # Fragment manifest
  ├── segment_properties/
  │   └── info              # Segment labels and descriptions
  └── 0/                    # Scale level 0 volume chunks (for segmentation types)
      └── 0-64_0-64_0-64    # Chunk files
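The segment_properties/info file maps segment IDs to human-readable labels (e.g. the Category A domain names). A minimal sketch of building that JSON; the function name is illustrative:

```python
def segment_properties_info(id_to_label: dict) -> dict:
    """Build a segment_properties 'info' JSON
    (format: neuroglancer_segment_properties) from {segment_id: label}."""
    ids = sorted(id_to_label)
    return {
        "@type": "neuroglancer_segment_properties",
        "inline": {
            "ids": [str(i) for i in ids],  # IDs are strings in this format
            "properties": [{
                "id": "label",
                "type": "label",
                "values": [id_to_label[i] for i in ids],
            }],
        },
    }
```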

Requirements

pip install -r requirements.txt

Templates Available

Template             VFB ID         Voxel Size                Dimensions
JRC2018Unisex        VFB_00101567   0.519 × 0.519 × 1.0 µm    1211 × 567 × 175
JFRC2                VFB_00017894   0.622 µm isotropic        1024 × 512 × 218
JRC FlyEM Hemibrain  VFB_00101384   varies                    varies
Court2018 VNS        VFB_00100000   0.461 × 0.461 × 0.7 µm    varies
L3 CNS Wood2018      VFB_00049000   0.29 × 0.29 × 0.5 µm      varies
Ito2014              VFB_00030786   varies                    varies
