To ChatGPT: Given everything that is known in neuroscience about receptive fields in visual cortex, plus the refractory period and perceptual fading upon repeated stimulation, what are the expected effective resolution and frame rate for late-blind recipients of a Neuralink Blindsight brain implant with tens of thousands of electrodes?

https://chatgpt.com/share/689f4b3d-adc8-8004-a78a-9cbc1ce74542

ChatGPT 5:

Short answer: even with 'tens of thousands' of electrodes in V1, you shouldn't expect anything like video-rate, pixel-like vision. A realistic, evidence-based expectation for late-blind users is on the order of a few **hundred** independently useful 'picture elements' over a small central field, with an **effective refresh** in the **~1-5 Hz** range for coherent forms (higher only for very sparse, transient cues) before adaptation, fading, and safety constraints bite.

Here's why (compact, but with receipts):

* #1 Spatial resolution is limited by V1 biology, not the raw electrode count
  - **Cortical magnification & receptive fields.** In central V1, millimeters of cortex map to fractions of a degree, and the 'point image' (the patch of cortex effectively activated by a point stimulus) is roughly millimeter-scale, growing rapidly with eccentricity. Packing electrodes more densely than that doesn't buy you independent phosphenes: current spreads and receptive fields overlap. So, over the central few degrees, you are realistically looking at **hundreds** of separable sites, not tens of thousands. ([PMC][1], [ScienceDirect][2], [jov.arvojournals.org][3])
  - **Human V1 surface area is finite.** Typical V1 is ~800-1,800 mm² per hemisphere (with large individual variance), setting an upper bound on how many ~mm-scale, independently addressable loci you can recruit. ([PMC][4], [Nature][5])
  - **Empirics with multi-electrode V1 stimulation.** Human and NHP work shows that simultaneously lit 'dots' don't sum like pixels; interactions, crowding, and imprecision dominate.
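The surface-area and point-image bounds above already pin down the order of magnitude. A minimal sketch, assuming a mid-range V1 area, a ~2 mm² cortical footprint per independent locus, and that roughly half of V1 serves the central ~10° (all three numbers are assumptions, not measurements):

```python
# Rough upper bound on independently addressable phosphene loci in central V1.
# Every constant here is an assumed order of magnitude, not a measured value.

V1_AREA_MM2 = 1200.0      # per hemisphere; mid-range of the ~800-1,800 mm2 spread
CENTRAL_FRACTION = 0.5    # assumed share of V1 devoted to the central ~10 degrees
SITE_FOOTPRINT_MM2 = 2.0  # ~mm-scale point image -> one independent locus per ~2 mm2

def separable_sites(area_mm2, central_fraction, footprint_mm2, hemispheres=2):
    """Upper bound on separable phosphene sites, ignoring crowding and distortion."""
    return hemispheres * area_mm2 * central_fraction / footprint_mm2

print(separable_sites(V1_AREA_MM2, CENTRAL_FRACTION, SITE_FOOTPRINT_MM2))  # 600.0
```

Even this optimistic bound lands in the hundreds; crowding and phosphene interactions only push it lower.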
    Dynamic/sequential strategies work better than static 'pixel arrays,' but that trades spatial density for time (see #3). ([PMC][6], [Science][7])

* #2 You can't safely drive 'all the channels' hard and fast
  - **Charge density & duty cycle.** Safe intracortical stimulation is bounded by per-phase charge and charge-density limits; practical protocols use short biphasic pulses at low duty cycles, capping how many sites you can drive per unit time without exceeding tissue safety margins. ([PMC][8])
  - **Localization is imperfect.** Even with carefully titrated currents, blind participants have difficulty reliably localizing many phosphenes at once, which effectively reduces the usable 'resolution.' ([ScienceDirect][9])

* #3 Temporal resolution is throttled by adaptation and by the need to 'draw'
  - **Perceptual fading/adaptation.** Repeated cortical stimulation leads to rapid decreases in perceived brightness/intensity (phosphene fading); higher pulse rates accelerate adaptation, forcing rest periods or lower rates, both of which reduce frame rate. (The clearest human adaptation data come from S1 ICMS, but the same pattern, stronger and faster fading at higher rates, has been observed with visual phosphene reports and general cortical stimulation.) ([microprobes.com][10], [ScienceDirect][11])
  - **No simple flicker-fusion 'video rate.'** The classic visual-cortex literature notes that phosphenes under electrical drive don't show a sharp flicker-fusion boundary like normal vision; they remain 'stroboscopic' unless rates are pushed high, which then worsens adaptation and safety margins. ([ResearchGate][12])
  - **Dynamic (sequential) stimulation that actually yields shapes is slow.** The most convincing human demonstrations 'trace' letters by stimulating electrodes in sequence. Those studies report recognition of up to **86 shapes per minute** (~1.4 Hz) in blind participants: great for symbols, but far from video.
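The reported letter rate fixes the time budget, and the same arithmetic shows how it degrades with pattern complexity. A hedged sketch (the per-step dwell time below is an assumed parameter, not a reported value):

```python
# Why sequential "tracing" caps effective refresh: each pattern costs
# steps * dwell seconds, so more steps per pattern means a lower frame rate.
# DWELL_S is an assumed per-electrode dwell time, not a measured figure.

DWELL_S = 0.07  # assumed seconds per sequential stimulation step

def effective_refresh_hz(steps_per_pattern, dwell_s=DWELL_S):
    """Frames per second when each pattern is traced one electrode at a time."""
    return 1.0 / (steps_per_pattern * dwell_s)

print(round(86 / 60, 2))                   # reported letter rate: 1.43 Hz
print(round(effective_refresh_hz(10), 2))  # letter-like (10 steps): 1.43 Hz
print(round(effective_refresh_hz(100), 2)) # scene-like (100 steps): 0.14 Hz
```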
    Scaling from letters to scenes only increases the number of sequential steps and thus **lowers** the effective refresh for a whole image. ([PubMed][13])

Putting it together, an order-of-magnitude, late-blind, V1-targeted expectation:

* **Usable spatial resolution:**
  - Central ~5-10°: **~200-800** effective, separable elements (best case: well-mapped, moderate current, late-blind cortex), degrading quickly with eccentricity. Expect distortions and crowding rather than a neat grid. ([PMC][1])
  - Whole useful field: still on the order of **hundreds to low thousands** of distinguishable 'positions,' not tens of thousands.
* **Usable frame rate (for coherent forms/scenes):**
  - **~1-5 Hz** for patterns that must be drawn/sequenced (letters/contours), depending on path length and electrode count per pattern. Sparse beacons (a few brief phosphenes) can refresh faster; dense patterns slower. Pushing rates higher triggers faster fading and hits charge/duty limits; continuous **10-30 Hz 'video'** is not supported by current human data. ([PubMed][13], [PMC][8], [ScienceDirect][11])

Caveats that can swing these numbers down (not up): inter-subject V1 variability; electrode placement vs. retinotopy; representational drift requiring frequent remapping; and seizure-risk considerations that further cap simultaneous activation. None of these trends improves effective resolution or frame rate relative to the bounds above. ([PMC][4], [Nature][14])

If you want, I can turn these constraints into a simple quantitative 'budget calculator' (inputs: field size, electrode pitch, per-site dwell time, duty cycle) to show how quickly the pixel and frame-rate numbers collapse once you stay inside safe charge and adaptation limits.

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC10089656/ "Cortical magnification eliminates differences in contrast ..."
[2]: https://www.sciencedirect.com/science/article/pii/S0896627303002654 "Cortical Magnification within Human Primary Visual Cortex ..."
[3]: https://jov.arvojournals.org/article.aspx?articleid=2192509 "Visual field representations and locations of visual areas V1/2 ..."
[4]: https://pmc.ncbi.nlm.nih.gov/articles/PMC9192713/ "Linking individual differences in human primary visual ..."
[5]: https://www.nature.com/articles/s41467-022-31041-9 "Linking individual differences in human primary visual ..."
[6]: https://pmc.ncbi.nlm.nih.gov/articles/PMC9831085/ "Percepts evoked by multi-electrode stimulation of human ..."
[7]: https://www.science.org/doi/10.1126/science.abd7435 "Shape perception via a high-channel-count ..."
[8]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5386002/ "Tissue damage thresholds during therapeutic electrical ..."
[9]: https://www.sciencedirect.com/science/article/pii/S1935861X21002266 "Multi-electrode stimulation evokes consistent spatial ..."
[10]: https://microprobes.com/files/pdf/publications/by-product/custom/feasibility_of_visual_prosthesis.pdf "Feasibility of a visual prosthesis for the blind based on ..."
[11]: https://www.sciencedirect.com/science/article/pii/S1935861X22001000 "Effects of stimulus pulse rate on somatosensory adaptation ..."
[12]: https://www.researchgate.net/publication/20779099_Visual_sensations_produced_by_intracortical_microstimulation_of_the_human_occipital_cortex "(PDF) Visual sensations produced by intracortical ..."
[13]: https://pubmed.ncbi.nlm.nih.gov/32413298/ "Dynamic Stimulation of Visual Cortex Produces Form Vision in Sighted and Blind Humans - PubMed"
[14]: https://www.nature.com/articles/s41467-021-25436-3 "Stimulus-dependent representational drift in primary visual ..."
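The 'budget calculator' offered at the end of the answer can be sketched directly from the constraints above. Every default below (cortical territory, per-locus footprint standing in for electrode pitch, dwell time, duty cycle, steps per pattern) is an assumed order of magnitude, not a measured parameter:

```python
# Phosphene "budget calculator": how many separable sites you get, and how fast
# a traced pattern can refresh, once dwell time and a safety duty cycle are set.
# All defaults are assumptions chosen to match the answer's orders of magnitude.

def phosphene_budget(
    cortex_mm2=1200.0,     # cortical territory serving the usable field
    footprint_mm2=2.0,     # ~mm-scale point image per independent locus
    dwell_s=0.05,          # assumed per-site stimulation dwell time
    duty_cycle=0.5,        # fraction of time stimulation may be on (safety)
    steps_per_pattern=10,  # loci sequenced per "frame" (letter-like pattern)
):
    sites = cortex_mm2 / footprint_mm2
    frame_s = steps_per_pattern * dwell_s / duty_cycle
    return {"separable_sites": sites, "refresh_hz": 1.0 / frame_s}

print(phosphene_budget())                       # letter-like: 600 sites, ~1 Hz
print(phosphene_budget(steps_per_pattern=100))  # scene-like: refresh ~0.1 Hz
```

Both axes collapse quickly: halving the duty cycle or doubling the pattern length halves the refresh, which is why the few-hundred-element, ~1-5 Hz estimate is hard to beat from inside the safety envelope.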