To ChatGPT: Brain implants in visual cortex such as Neuralink Blindsight cannot directly convey visual textures, shading and smooth surfaces, because simultaneously activating many electrodes above phosphene threshold would cause seizures. Any solutions?

https://chatgpt.com/share/68469685-71c8-8004-a19e-24466ac7ff7c

ChatGPT 5:

You're correct that with current electrode-based visual cortex implants (including Neuralink’s Blindsight concept), high-density stimulation to represent fine visual textures or smooth gradients runs into two major barriers:

1. **Seizure risk** — broad simultaneous activation of many electrodes above threshold increases cortical hyperexcitability and the risk of epileptiform activity.
2. **Perceptual limitations** — even without seizures, concurrent high-amplitude stimulation tends to blur or merge percepts instead of creating continuous shading or texture.

That means “painting” a whole patch of cortex all at once is both unsafe and perceptually crude. Still, there are some potential workarounds researchers have explored or theorized:

---

### **1. Temporal multiplexing (“phosphene scanning”)**

* **Principle:** Instead of lighting up all relevant electrodes at once, stimulate subsets in rapid sequence.
* **Goal:** The brain integrates these into a smooth percept through persistence of vision, like a raster display.
* **Caveats:**
  * Requires extremely fast switching (<10 ms per frame element) to avoid flicker.
  * Might reduce seizure risk, but still limited by total stimulation duty cycle and current spread.
  * May not fully capture fine shading without long dwell times, which slow the update rate.

---

### **2. Subthreshold population coding**

* **Principle:** Stimulate many electrodes *below* phosphene threshold so no single electrode elicits a distinct flash, but the combined pattern biases cortical activity toward a perceptual gradient.
* **Goal:** Convey shading and texture without discrete phosphene dots.
* **Caveats:**
  * Requires very precise calibration and closed-loop decoding of cortical responses.
  * Very little human perceptual data exists for subthreshold multi-electrode coding in visual cortex.

---

### **3. Current steering and virtual electrodes**

* **Principle:** Deliver carefully phased, low-current pulses to neighboring electrodes to shift the locus of stimulation between them (“virtual sites”).
* **Goal:** Smooth interpolation between phosphenes, potentially mimicking texture edges and gradients.
* **Caveats:**
  * Works best in retina/optic nerve; cortical receptive fields are less regular and may distort results.
  * Might still trigger seizures if the net charge in a cortical patch is too high.

---

### **4. Patterned microstimulation with stochastic resonance**

* **Principle:** Inject low-level random noise stimulation into cortical networks along with a weak signal pattern. The noise can help neurons detect otherwise subthreshold signals.
* **Goal:** Enhance perception of gradients without strong simultaneous suprathreshold activation.
* **Caveats:**
  * Needs fine tuning to avoid the noise masking the pattern.
  * Clinical seizure risk from noise modulation is largely untested.

---

### **5. Hybrid encoding with sensory substitution**

* **Principle:** Let the implant convey sparse, safe-to-stimulate anchor points (edges, salient points), while texture, shading, and smoothness are provided by an *external modality* such as auditory or tactile encoding (e.g., The vOICe).
* **Goal:** Reduce cortical stimulation load while still giving rich perception via multisensory integration.
* **Caveats:**
  * Requires training to fuse multisensory cues into a unified percept.

---

### **6. Ultra-focal stimulation using non-electrode adjuncts**

* **Examples:**
  * *Optogenetics* — light-sensitive cortical neurons allow high-resolution stimulation with less spread.
  * *Focused ultrasound* — targeted modulation without simultaneous large-area electrical activation.
* **Goal:** Sharper, safer activation patterns.
* **Caveats:** These are still experimental in humans and require genetic modification or novel hardware.

---

If I had to bet on **near-term** viability, a combination of **fast temporal multiplexing** + **subthreshold population coding** + **hybrid sensory substitution** is the most plausible workaround.

If you want, I can sketch a diagram showing how these methods could be layered in a Blindsight-like system to convey texture without provoking seizures.
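To make the near-term combination above more concrete, here is a minimal sketch of how temporal multiplexing (#1) and subthreshold amplitude coding (#2) could be combined in a stimulation scheduler. Everything here is an illustrative assumption, not the parameters or API of any real implant: the slot count, the per-slot electrode cap, and the `sub_factor` safety margin are all hypothetical, and the per-electrode thresholds are treated as already calibrated.

```python
# Hypothetical sketch: temporal multiplexing + subthreshold amplitude coding.
# All numbers (slot count, active-electrode cap, safety factor) are
# illustrative assumptions, not parameters of any real device.
import numpy as np

def build_schedule(frame, thresholds, n_slots=8, max_active=16, sub_factor=0.8):
    """Map a grayscale frame (values in [0, 1]) onto a per-slot stimulation plan.

    frame      : 2D array, one intensity value per electrode site
    thresholds : 2D array, per-electrode phosphene threshold (e.g. in uA)
    n_slots    : time slots per frame (temporal multiplexing)
    max_active : cap on simultaneously active electrodes per slot
    sub_factor : scales amplitudes to stay below phosphene threshold
    """
    rows, cols = frame.shape
    schedule = [[] for _ in range(n_slots)]
    # Visit brightest sites first so they are spread evenly across slots.
    order = np.argsort(-frame, axis=None)           # flattened, descending
    for rank, flat_idx in enumerate(order):
        r, c = divmod(int(flat_idx), cols)
        if frame[r, c] <= 0.0:
            continue                                # dark pixel: no pulse
        slot = rank % n_slots                       # round-robin multiplexing
        if len(schedule[slot]) >= max_active:
            continue                                # duty-cycle safety cap
        # Subthreshold coding: amplitude scales with intensity but is
        # always capped below the per-electrode phosphene threshold.
        amp = sub_factor * frame[r, c] * thresholds[r, c]
        schedule[slot].append((r, c, amp))
    return schedule
```

The key safety properties the sketch encodes are the two barriers from the top of the answer: no slot ever drives more than `max_active` electrodes at once, and no commanded amplitude reaches the phosphene threshold, so a smooth gradient is conveyed by the population pattern over time rather than by many simultaneous suprathreshold flashes.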