Hello Community and Google Engineers,
I am a member of the Google Developer Program working on a specialized computer vision project that requires absolute colorimetric accuracy (scientific measurement) rather than the standard “perceptually pleasing” computational photography provided by the stock ISP.
Based on my research, I understand that the standard ISP pipeline (Tone Mapping, AWB, Color Space Transforms) is proprietary and cannot be modified by external developers to support non-standard rendering intents.
Therefore, I am exploring a “Parallel Pipeline” architecture on Pixel devices (specifically Pixel 7/8/9 Pro) and need to confirm the technical feasibility of the following workflow before committing resources:
The Proposed Workflow:
* Capture: Acquire RAW_SENSOR data via the Camera2 API (aiming for linear, scene-referred data).
* Process: Pass this RAW data directly to a custom TensorFlow Lite (TFLite) model.
* Compute: The model performs a learned “spectral reconstruction” or “complex color space transformation” (mapping RAW RGB to absolute CIELAB).
* Hardware Acceleration: Execute this model on the Pixel TPU (using the NNAPI delegate or the new Google AI Edge SDK).
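For concreteness, here is a minimal sketch of the classical transform the model in step 3 would be learning to approximate (or improve on). The black level, white level, and camera-RGB→XYZ matrix below are illustrative placeholders only; on-device these would come from the Camera2 `CaptureResult` / DNG metadata (e.g. `SENSOR_COLOR_TRANSFORM1/2`), not hard-coded constants:

```python
import numpy as np

# Classical reference pipeline: RAW sensor RGB -> linear normalized RGB
# -> CIE XYZ -> CIELAB (D50). All calibration values are placeholders.

BLACK_LEVEL = 64.0      # typical 10-bit pedestal; device-specific
WHITE_LEVEL = 1023.0

# Hypothetical camera-RGB -> XYZ matrix. By construction below, an
# equal-energy (neutral) camera triplet maps exactly to the white point.
CAM_TO_XYZ = np.array([
    [0.4361, 0.3851, 0.1431],
    [0.2225, 0.7169, 0.0606],
    [0.0139, 0.0971, 0.7141],
])

WHITE_XYZ = CAM_TO_XYZ @ np.ones(3)  # XYZ of a saturated neutral patch

def raw_to_lab(raw_rgb):
    """Map a demosaiced RAW triplet to CIELAB coordinates (L*, a*, b*)."""
    linear = (np.asarray(raw_rgb, float) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    linear = np.clip(linear, 0.0, 1.0)
    xyz = CAM_TO_XYZ @ linear
    # Standard CIE f() with the linear toe below (6/29)^3
    t = xyz / WHITE_XYZ
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

A learned model would replace the fixed 3×3 matrix with a scene-adaptive mapping, but this closed-form version is useful as the ground-truth target when validating the TFLite output (a full-scale neutral patch should come out at L* = 100, a* = b* = 0).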
My Critical Questions (The “Blockers”):
* TPU Access for Regression: Does the Pixel TPU delegate currently support the high-precision floating-point operations required for regression tasks (outputting precise CIELAB coordinates), or is it strictly optimized for quantized classification/detection workloads?
* RAW Data Integrity: When capturing RAW_SENSOR on Pixel, does the firmware apply any irreversible “baking” (like local tone mapping or spatial gain maps) before the data reaches the API, which would render scientific colorimetry impossible?
* Throughput: Is it realistic for an external developer to achieve near real-time performance for high-resolution RAW processing on the TPU, or is the required bandwidth/memory access privileged to first-party Google services (like Magic Eraser/Real Tone)?
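To frame the throughput question, a back-of-envelope sizing may help. The figures below are assumptions for estimation (a small per-pixel MLP applied at full mosaic resolution), not measured Pixel numbers:

```python
# Rough compute budget for a per-pixel regression MLP (3 -> H -> H -> 3)
# over a full-resolution RAW frame. Counts multiply-accumulates and
# reports FLOPs (2 per MAC). Purely arithmetic; no hardware assumptions.

def per_frame_gflops(width, height, hidden=64):
    macs_per_px = 3 * hidden + hidden * hidden + hidden * 3
    return width * height * macs_per_px * 2 / 1e9

# ~12 MP sensor readout, illustrative
frame = per_frame_gflops(4000, 3000)   # GFLOPs per frame
```

For a 4000×3000 mosaic this works out to roughly 108 GFLOPs per frame, i.e. on the order of 3 TFLOP/s sustained at 30 fps before accounting for memory traffic. Whether the TPU delegate exposes that kind of sustained throughput (and bandwidth) to third-party workloads is exactly what I am hoping someone can confirm.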
Context:
My goal is to treat the Pixel device as a colorimeter. If the hardware abstraction layer prevents raw linear access or restricts TPU usage for this type of custom signal processing, I will need to pivot my hardware strategy.
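For anyone wanting to probe question 2 empirically: one device-agnostic check is to exposure-bracket a static gray patch and test whether the black-level-subtracted RAW_SENSOR means scale linearly with exposure time. A sketch below; the exposure and patch-mean arrays are stand-ins for values collected from an actual bracket, and a large residual would suggest tone mapping or gain maps baked in before the API:

```python
import numpy as np

def linearity_residual(exposures_ns, patch_means, black_level=64.0):
    """Relative RMS deviation of patch means from a linear-through-origin
    fit against exposure time. Near zero for a truly linear RAW path."""
    x = np.asarray(exposures_ns, float)
    y = np.asarray(patch_means, float) - black_level
    k = (x @ y) / (x @ x)        # least-squares slope through the origin
    resid = y - k * x
    return float(np.sqrt(np.mean(resid ** 2)) / np.mean(y))
```

If the residual stays within shot-noise expectations across a wide bracket, the scientific-colorimetry use case is at least plausible on that device.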
Any insights from the TensorFlow Lite or Pixel Camera engineering teams would be invaluable.
Thank you.