<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://hyunseolee43.github.io/feed.xml" rel="self" type="application/atom+xml"/><link href="https://hyunseolee43.github.io/" rel="alternate" type="text/html" hreflang="en"/><updated>2026-05-02T03:24:37+00:00</updated><id>https://hyunseolee43.github.io/feed.xml</id><title type="html">blank</title><subtitle>Hyun Seo (Emily) Lee — biomedical engineering PhD student at Johns Hopkins University, working on smartphone-based nailfold capillaroscopy and non-invasive hemoglobin estimation in Dr. Nicholas Durr&apos;s Computational Biophotonics Lab. </subtitle><entry><title type="html">Monte Carlo Simulation for Light Transport in Tissue</title><link href="https://hyunseolee43.github.io/blog/2026/monte-carlo-light-transport-in-tissue/" rel="alternate" type="text/html" title="Monte Carlo Simulation for Light Transport in Tissue"/><published>2026-03-26T12:00:00+00:00</published><updated>2026-03-26T12:00:00+00:00</updated><id>https://hyunseolee43.github.io/blog/2026/monte-carlo-light-transport-in-tissue</id><content type="html" xml:base="https://hyunseolee43.github.io/blog/2026/monte-carlo-light-transport-in-tissue/"><![CDATA[<p>I gave a journal club talk this spring on <strong>Monte Carlo simulation for modeling light transport in tissue</strong>, more specifically on why simple analytical models break when we try to use them on a finger, what Monte Carlo actually does under the hood, and how it lets us simulate the photon paths through skin and capillaries that we care about for nailfold imaging.</p> <h2 id="introduction">Introduction</h2> <p>I work on a smartphone-based <a href="https://durr.jhu.edu">nailfold capillaroscopy</a> system for non-invasive hemoglobin estimation. 
The setup is pretty intuitive: a green micro-LED back-illuminates the capillary loops at the base of the fingernail, and we use the <strong>Beer–Lambert law</strong> to relate the transmitted intensity to hemoglobin concentration along the optical path:</p> \[I = I_0 \, e^{-\mu_a L}\] <p>This works beautifully in a clean cuvette of hemoglobin solution. The problem is that a finger is not a cuvette. It’s a <strong>multi-layered, highly scattering, heterogeneous medium</strong> — epidermis on top, vascularized dermis below, bone underneath, capillary loops looping through the dermis, sebum and curvature at every interface. Photons don’t travel in straight lines through any of it.</p> <p>Beer–Lambert silently assumes:</p> <ol> <li>A homogeneous medium with one absorption coefficient.</li> <li>A straight-line path of fixed length $L$.</li> <li>No scattering — every non-absorbed photon arrives at the detector along the same trajectory.</li> </ol> <p>In real tissue, none of these are true. Photons scatter many times before being absorbed or detected; the effective path length is longer than the geometric thickness; and pressing harder on the finger redistributes capillaries and changes that path length dynamically. 
So the question I needed an answer to was: <strong>if I can’t write down a closed-form expression for where the light goes, can I simulate it instead?</strong></p> <h2 id="optical-properties-recap">Optical Properties Recap</h2> <p>Every layer of tissue has four numbers that govern its photon behavior:</p> <ul> <li><strong>Refractive index</strong> $n$ — bending at interfaces (Fresnel reflection).</li> <li><strong>Absorption coefficient</strong> $\mu_a$ — probability of an absorption event per unit length, dominated by melanin and hemoglobin in the visible range.</li> <li><strong>Scattering coefficient</strong> $\mu_s$ — probability of a scattering event per unit length.</li> <li><strong>Anisotropy factor</strong> $g$ — average $\cos\theta$ of the scattering angle. Biological tissue is highly forward-scattering, so $g \approx 0.9$.</li> </ul> <p>These are wavelength-dependent, which is why the device design (660 nm, 940 nm, green, etc.) interacts strongly with which depths and which chromophores we can actually see.</p> <h2 id="diffusion-theory">Diffusion Theory</h2> <p>The textbook way to model bulk light transport is <strong>diffusion theory</strong>, which treats photons like a concentration that diffuses down a gradient:</p> \[\frac{\partial C}{\partial t} = D\nabla^2 C - \mu_a \, c\, C + S\] <p>(Here $C$ is the photon concentration, $c$ the speed of light in the medium, $D$ the diffusion coefficient, and $S$ the source term.) It’s fast, it’s analytical, and it’s a perfectly fine approximation when scattering dominates absorption ($\mu_s' \gg \mu_a$) and you’re far from sources and boundaries. 
But it falls apart exactly where I need accuracy:</p> <ul> <li><strong>Near boundaries</strong> — like the skin surface or a vessel wall, where photon flux has sharp curvature.</li> <li><strong>Near localized absorbers</strong> — like a dense hemoglobin-filled capillary loop sitting in a less-absorbing dermis.</li> <li><strong>Near the source</strong> — like the LED illumination point itself, before the photons have had time to randomize their directions.</li> </ul> <p>Every interesting feature of the nailfold problem lives in one of those failure modes. So we need a method that doesn’t make the diffusion assumption.</p> <h2 id="montec-carlo">Monte Carlo</h2> <p>Monte Carlo simulation, <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">developed by Stanislaw Ulam and colleagues</a> in the 1940s, solves problems by <strong>repeated random sampling</strong>. Instead of solving a global differential equation, you simulate the random walk of a single photon, then do that ten million times, and the macroscopic distribution of where photons end up becomes a numerical approximation to the true light field.</p> <p>The intuition: the rules a single photon follows (step size, scattering angle, absorption probability) are all just probability distributions. If we can sample from those distributions with a uniform random number generator, we can simulate one photon. And by the law of large numbers, if we simulate enough of them, the aggregate behavior converges to the right answer.</p> <p>The whole simulation reduces to a loop: <strong>Launch → Hop → Drop → Spin → Terminate</strong>, repeated until $N$ photons have been propagated.</p> <h2 id="the-five-step-loop">The five-step loop</h2> <h3 id="1-launch">1. 
Launch</h3> <p>Each photon is initialized at a position $(x, y, z)$ — typically the source location — with a trajectory specified by directional cosines $(u_x, u_y, u_z)$:</p> \[u_x = \sin\theta\cos\varphi, \quad u_y = \sin\theta\sin\varphi, \quad u_z = \cos\theta\] <p>It also gets an initial weight $W = 1.0$ representing its energy. As the photon undergoes absorption, that weight will decay; we track it spatially in a cylindrical $(z, r)$ grid $A[i_z][i_r]$ to build up an absorption map.</p> <h3 id="2-hop--sampling-step-size">2. Hop — sampling step size</h3> <p>A photon travels some random distance $s$ before its next interaction (either scattering or absorption). The total attenuation coefficient is $\mu_t = \mu_a + \mu_s$, and step size is <strong>exponentially distributed</strong>:</p> \[p(s) = \mu_t \, e^{-\mu_t s}\] <p>To sample $s$ from this distribution using a uniform random number $\xi \in [0,1]$, we use <strong>inverse-CDF sampling</strong>. The CDF is $F(s) = 1 - e^{-\mu_t s}$, and setting $F(s) = \xi$ and solving:</p> \[s = -\frac{\ln(\xi)}{\mu_t}\] <p>(The trick that $1 - \xi$ is statistically equivalent to $\xi$ for a uniform on $[0,1]$ is what lets us drop the “1 −” inside the log.)</p> <p>After computing $s$, we check whether the new position would carry the photon across a tissue boundary. If it would cross out into air, we take a partial step to the boundary, evaluate <strong>Fresnel reflectance</strong> $R$ at that interface, and either reflect the photon back into the tissue or let it escape (and record it in the reflectance map).</p> <h3 id="3-drop--depositing-absorbed-weight">3. 
Drop — depositing absorbed weight</h3> <p>When a photon reaches its interaction site, a fraction of its weight is deposited locally as absorbed energy:</p> \[\Delta W = W \cdot \frac{\mu_a}{\mu_a + \mu_s}, \qquad W \leftarrow W - \Delta W\] <p>The deposited weight $\Delta W$ is added to the spatial grid $A[i_z][i_r]$, which is what eventually becomes the <strong>internal absorption map</strong> — the heat map of where the photon energy ended up.</p> <h3 id="4-spin--sampling-the-scattering-angle">4. Spin — sampling the scattering angle</h3> <p>After absorption, the photon scatters into a new direction. The deflection angle $\theta$ is sampled from the <strong>Henyey–Greenstein phase function</strong>:</p> \[p(\cos\theta) = \frac{1 - g^2}{2 \, (1 + g^2 - 2g\cos\theta)^{3/2}}\] <p>Two sanity checks for this function: integrating over all angles gives 1 (probability is conserved — the photon goes <em>somewhere</em>), and the expectation value $\langle\cos\theta\rangle$ is exactly $g$. So if you set $g = 0$ you get isotropic scattering; at $g = 0.9$ you get strongly forward-peaked scattering, which is what tissue actually does.</p> <p>Inverse-CDF sampling on this distribution gives a closed-form expression for $\cos\theta$ in terms of a uniform $\xi$:</p> \[\cos\theta = \frac{1}{2g}\left[ 1 + g^2 - \left(\frac{1 - g^2}{1 - g + 2g\xi}\right)^2 \right]\] <p>The azimuthal angle $\psi$ is sampled uniformly on $[0, 2\pi]$ since tissue is locally isotropic in azimuth. We then apply a <strong>3D rotation matrix</strong> to update the directional cosines $(u_x, u_y, u_z)$, which sets up the photon’s next Hop.</p> <h3 id="5-terminate">5. Terminate</h3> <p>Photons that have lost most of their weight aren’t worth simulating further. But naively killing low-weight photons biases the energy budget. The standard solution is <strong>Russian roulette</strong>: once the weight drops below a threshold (e.g., $W_{th} = 10^{-4}$), give the photon a 1-in-$m$ chance of survival. 
If it survives, multiply its weight by $m$ and let it keep going; if it doesn’t, set $W = 0$ and kill it. The expectation value of the photon’s contribution is preserved, so the simulation stays unbiased.</p> <p>The total number of interaction steps a photon goes through scales roughly as $N_{\text{steps}} \sim \ln(1/W_{th}) / (\mu_a / \mu_t)$, which means <strong>as $\mu_a$ gets smaller (less absorbing tissue), you spend a lot more computation per photon</strong>. Visible-light skin simulations are tractable; deep-NIR simulations get expensive.</p> <h2 id="two-flavors-of-monte-carlo-i-care-about">Two flavors of Monte Carlo I care about</h2> <h3 id="mcml--multi-layered-media">MCML — multi-layered media</h3> <p><a href="https://omlc.org/software/mc/">MCML (Wang, Jacques &amp; Zheng 1995)</a> treats tissue as a stack of <strong>infinite, parallel slabs</strong> — epidermis on top, dermis below, hypodermis, etc. — each with its own $(\mu_a, \mu_s, g, n)$. It uses cylindrical coordinates because the geometry is rotationally symmetric around the source. It’s the gold standard for layered skin optics and gives you reflectance and transmittance curves you can compare directly to spectroscopy data.</p> <p>The <strong>limitation</strong> for capillaroscopy: I can’t represent a capillary loop in MCML. It would smear the entire vascular bed into a flat infinite sheet of blood, which is exactly the spatial information I’m trying to recover.</p> <h3 id="mcxyz--voxel-based-3d-heterogeneous-media">mcxyz — voxel-based 3D heterogeneous media</h3> <p><a href="https://omlc.org/software/mc/mcxyz/"><code class="language-plaintext highlighter-rouge">mcxyz.c</code> (Jacques 2013)</a> uses the same hop–drop–spin core but represents the medium as a <strong>3D voxel grid</strong>, typically $400\times400\times400$ at micron resolution. Each voxel carries an integer ID pointing to a tissue type with its own optical properties.</p> <p>This is the version I actually need. 
I can place a real epidermis layer on top, populate the dermis with discrete capillary loops at varying depths and orientations, and ask questions like:</p> <ul> <li><em>How does illumination angle affect contrast?</em> (the slides showed 20° angled vs. vertical — they look very different)</li> <li><em>How deep can I see capillaries before contrast collapses?</em></li> <li><em>What does pressing on the finger (which displaces capillaries) do to my measured intensity?</em></li> </ul> <h2 id="connecting-back-to-the-project">Connecting Back to the Project</h2> <p>My near-term plan is to use mcxyz to build <strong>lookup tables</strong> mapping observed image contrast and apparent vessel width to the underlying capillary depth and hemoglobin concentration. Right now my hemoglobin estimation pipeline assumes a Beer–Lambert path through homogeneous tissue; what I want is a forward model that says “if the capillary is at depth $d$ with hemoglobin $C$, the smartphone will see this intensity” — and then invert that. 
Monte Carlo gives me the forward model.</p> <p>A few things I came away from this talk wanting to do:</p> <ol> <li>Validate my mcxyz simulations against MCML for a homogeneous-slab sanity check.</li> <li>Build the capillary geometry parametrically (depth, diameter, orientation, density) so I can sweep the design space.</li> <li>Extend to varying skin pigmentation — which connects directly back to the <a href="/blog/2025/skin-tone-optics-and-quantification-methods/">skin tone optics talk</a> I gave last fall, since melanin in the epidermis attenuates illumination before it reaches the capillary bed at all.</li> </ol> <h2 id="take-aways">Takeaways</h2> <ul> <li><strong>Beer–Lambert is a clean rule that lies in turbid media.</strong> The “path length” is a distribution, not a constant.</li> <li><strong>Diffusion theory is fast and wrong in exactly the regimes I care about</strong> — boundaries, sources, and localized absorbers.</li> <li><strong>Monte Carlo is just hop–drop–spin, repeated a few million times.</strong> Each step is a closed-form sample from a known distribution. The hard part is computational, not conceptual.</li> <li><strong>MCML is for spectroscopy, mcxyz is for imaging.</strong> If your question depends on geometry, you need voxels.</li> <li><strong>Forward models unlock inverse problems.</strong> Once I can simulate the photon paths, I can build calibration tables that turn raw camera intensities into clinically meaningful hemoglobin estimates — which is the whole point of the project.</li> </ul> <h2 id="references">References</h2> <ul> <li>Wang, L., Jacques, S. L., &amp; Zheng, L. (1995). MCML — Monte Carlo modeling of light transport in multi-layered tissues. <em>Computer Methods and Programs in Biomedicine</em>, 47(2), 131–146. <a href="https://doi.org/10.1016/0169-2607(95)01640-F">DOI:10.1016/0169-2607(95)01640-F</a></li> <li>Jacques, S. L. (2013). 
Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation. <em>Photoacoustics</em>, 1(2), 137–142. <a href="https://doi.org/10.1016/j.pacs.2014.06.001">DOI:10.1016/j.pacs.2014.06.001</a></li> <li>Jacques, S. L. (2013). Optical properties of biological tissues: a review. <em>Physics in Medicine and Biology</em>, 58(11), R37. <a href="https://doi.org/10.1088/0031-9155/58/11/R37">DOI:10.1088/0031-9155/58/11/R37</a></li> <li>Henyey, L. G., &amp; Greenstein, J. L. (1941). Diffuse radiation in the galaxy. <em>Astrophysical Journal</em>, 93, 70–83. <a href="https://doi.org/10.1086/144246">DOI:10.1086/144246</a></li> <li>Bajrami, D., Spano, F., Wei, K., Bonmarin, M., &amp; Rossi, R. M. (2025). Human skin models in biophotonics: materials, methods, and applications. <em>Advanced Healthcare Materials</em>, 14(24), e2501894. <a href="https://doi.org/10.1002/adhm.202501894">DOI:10.1002/adhm.202501894</a></li> </ul>]]></content><author><name></name></author><category term="notes"/><category term="optics"/><category term="biophotonics"/><category term="monte-carlo"/><category term="simulation"/><category term="journal-club"/><summary type="html"><![CDATA[Journal club on why Beer–Lambert and diffusion theory fail in turbid tissue, and how Monte Carlo's hop–drop–spin loop gives us a much better picture of where photons actually go inside skin.]]></summary></entry><entry><title type="html">Skin Tone Optics and Quantification Methods</title><link href="https://hyunseolee43.github.io/blog/2025/skin-tone-optics-and-quantification-methods/" rel="alternate" type="text/html" title="Skin Tone Optics and Quantification Methods"/><published>2025-11-11T12:00:00+00:00</published><updated>2025-11-11T12:00:00+00:00</updated><id>https://hyunseolee43.github.io/blog/2025/skin-tone-optics-and-quantification-methods</id><content type="html" 
xml:base="https://hyunseolee43.github.io/blog/2025/skin-tone-optics-and-quantification-methods/"><![CDATA[<p>I gave a journal club talk on <strong>skin tone optics and quantification methods</strong>, more specifically how light interacts with skin, why that interaction is wavelength-dependent and pigmentation-dependent, and what tools we actually have to measure pigmentation objectively. This post is a written-up version of that talk for anyone who’d like the highlights without the slide deck.</p> <h2 id="why-this-matters">Why this matters</h2> <p>The motivating fact for the entire talk is a 2020 <a href="https://doi.org/10.1056/NEJMc2029240">retrospective study from the University of Michigan</a> showing a consistent <strong>~2% measurement bias</strong> in pulse oximetry between Black and white patients. The bias is small in absolute terms but matters clinically as it determines whether a patient is flagged as hypoxemic and whether they receive supplemental oxygen.</p> <p>The bias arises because pulse oximetry uses two wavelengths (~660 nm red and ~940 nm NIR) and assumes a fixed relationship between absorption ratios and arterial oxygen saturation. <strong>Melanin in the epidermis violates that assumption</strong>: it absorbs strongly at 660 nm and only weakly at 940 nm, so for darker-pigmented skin the red channel is disproportionately attenuated <em>before</em> the light ever reaches blood. 
The device can’t tell the difference between “less hemoglobin in the optical path” and “more melanin in the optical path.”</p> <div class="row justify-content-center"> <div class="col-md-8"> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/posts/skin-tone/hemoglobin-480.webp 480w,/assets/img/posts/skin-tone/hemoglobin-800.webp 800w,/assets/img/posts/skin-tone/hemoglobin-1400.webp 1400w," type="image/webp" sizes="95vw"/> <img src="/assets/img/posts/skin-tone/hemoglobin.png" class="img-fluid rounded z-depth-1" width="100%" height="auto" data-zoomable="" loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"/> </picture> <figcaption class="caption">Molar extinction coefficients of oxyhemoglobin (HbO₂) and deoxyhemoglobin (Hb), compiled by Scott Prahl from multiple sources. The vertical lines mark the 660 nm and 940 nm wavelengths used by pulse oximeters. Data: <a href="https://omlc.org/spectra/hemoglobin/">omlc.org/spectra/hemoglobin</a>.</figcaption> </figure> </div> </div> <p>This isn’t an obscure problem. I run into the same issue building a smartphone-based <a href="#">nailfold capillaroscopy</a> system for non-invasive hemoglobin estimation: I have to manually adjust exposure for darker-skinned subjects to keep capillary contrast within a usable range. 
The question that motivated the talk is: <strong>how do we measure skin pigmentation objectively, so that we can correct for it rather than ignore it?</strong></p> <h2 id="a-30-second-tour-of-skin">A 30-second tour of skin</h2> <div class="row align-items-center mt-3"> <div class="col-md-5 mb-3 mb-md-0"> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/posts/skin-tone/skin-layers-480.webp 480w,/assets/img/posts/skin-tone/skin-layers-800.webp 800w,/assets/img/posts/skin-tone/skin-layers-1400.webp 1400w," type="image/webp" sizes="95vw"/> <img src="/assets/img/posts/skin-tone/skin-layers.jpg" class="img-fluid rounded z-depth-1" width="100%" height="auto" data-zoomable="" loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"/> </picture> </figure> <div class="caption"> Cross-section of human skin showing the epidermis, dermis, and hypodermis. <em>Image: Don Bliss / National Cancer Institute (public domain).</em> </div> </div> <div class="col-md-7"> <p>Skin is a layered, turbid medium. From the surface down:</p> <ul> <li><strong>Epidermis</strong> (~60–120 µm): protective barrier; contains melanocytes that produce <strong>melanin</strong> in melanosomes. Melanin is the dominant absorber in the visible range and the source of skin’s color variation.</li> <li><strong>Dermis</strong> (~90% of total thickness): collagen-and-elastin matrix, vascularized — this is where <strong>hemoglobin</strong> lives, and where most of the optical path length accumulates.</li> <li><strong>Hypodermis</strong>: subcutaneous fat, larger nerves and vessels, and mostly invisible to short-wavelength visible light.</li> </ul> </div>

Light hitting skin does three things, in roughly this order: reflects off the surface, absorbs as it travels, and scatters along the way.

### Reflection (~4–7% of incident light)

The sebum layer of the epidermis has a refractive index of about 1.5; air is about 1.0. 
The Fresnel equations give roughly 4–7% specular reflection at this interface — small, but it is what makes skin look glossy and what creates the highlight problems in any handheld imaging system.

### Absorption

Absorption is governed by the absorption coefficient $$\mu_a(\lambda)$$, and at visible wavelengths it's dominated by two chromophores:

**Melanin** (epidermis). Its absorption falls roughly exponentially with wavelength — high in the UV/blue, much lower in the NIR. Eumelanin (brown/black) and pheomelanin (red/yellow) are produced in different ratios across pigmentations. Critically, **melanocyte concentration doesn't change much with skin tone** — what changes is melanosome size, number, and the resulting volume fraction of melanin in the epidermis. Lower-pigmented skin sits around 1–2% melanosome volume fraction; darker skin can reach 40%+. That's a 20× swing in epidermal absorption from the same anatomy.

**Hemoglobin** (dermis). Both oxy- and deoxy-Hb have characteristic absorption peaks: a strong **Soret band around 400–420 nm** and **green–yellow bands around 540–600 nm**. At 660 nm, deoxy-Hb absorbs more than oxy-Hb (this is the foundation of pulse oximetry). At 940 nm, the relationship inverts.

Putting these together explains the pulse-ox bias quantitatively: at 660 nm, the red light has to traverse epidermal melanin first, and this loss is much larger for higher-pigmented skin than for lower. At 940 nm, melanin barely matters. So the ratio that the device interprets as oxygen saturation is partly driven by epidermal melanin — a signal it has no way to separate.

### Scattering

The remaining photons scatter on their way through tissue:

- **Mie scattering** dominates in the dermis, driven by filamentous proteins like collagen (and keratin in the epidermis) when scatterer size is comparable to wavelength.
- **Rayleigh scattering** dominates when scatterers are much smaller than wavelength. 
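The two-wavelength argument above is easy to make concrete with a toy Beer–Lambert sketch. Everything numeric here is illustrative — the absorption coefficients and epidermal thickness are made-up stand-ins, not measured tissue values:

```python
import math

def transmitted(intensity_in, mu_a, path_cm):
    """Beer-Lambert attenuation through one homogeneous layer."""
    return intensity_in * math.exp(-mu_a * path_cm)

def red_nir_ratio(mu_a_660, mu_a_940, epidermis_cm):
    """Ratio of red to NIR light surviving the epidermal melanin layer.

    The blood signal underneath is held identical; only epidermal
    melanin absorption differs between cases.
    """
    red = transmitted(1.0, mu_a_660, epidermis_cm)
    nir = transmitted(1.0, mu_a_940, epidermis_cm)
    return red / nir

# Illustrative (made-up) melanin absorption coefficients, in cm^-1:
# red (660 nm) is attenuated far more than NIR (940 nm), and both
# scale up with pigmentation.
lighter = red_nir_ratio(mu_a_660=50.0,  mu_a_940=10.0,  epidermis_cm=0.01)
darker  = red_nir_ratio(mu_a_660=500.0, mu_a_940=100.0, epidermis_cm=0.01)
```

Even with these toy numbers, the red/NIR ratio collapses for the higher-pigmentation case while the underlying "blood" signal is unchanged — the bias mechanism in miniature.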
The balance of Mie and Rayleigh, combined with absorption, determines the **effective optical path** that a photon takes — and most of that path is in the dermis.

## Measurement technique 1: Diffuse Reflectance Spectroscopy (DRS)

The cleanest way to characterize skin optics in the lab is diffuse reflectance spectroscopy. The setup is a broadband light source (halogen, xenon, or white LED), a fiber-optic probe with one delivery and one collection fiber, and a spectrometer. You compute a calibrated, dimensionless diffuse reflectance:

$$ R_d(\lambda) = \frac{I_{\text{skin}}(\lambda) - I_{\text{background}}(\lambda)}{I_{\text{standard}}(\lambda) - I_{\text{background}}(\lambda)} $$

where $$I_{\text{standard}}$$ is measured against a known reflectance standard. From $$R_d(\lambda)$$ you invert to recover $$\mu_a$$ and $$\mu_s'$$ — typically using lookup tables built from Monte Carlo simulations of the specific probe geometry, since closed-form analytical models don't handle layered turbid media well.

DRS is **the reference technique** — accurate, well-characterized, but not exactly portable. Probes are handheld (the Konica Minolta CM700d is a common form factor), but you still need a spectrometer and a controlled measurement environment.

## Measurement technique 2: Colorimetry and the Individual Typology Angle (ITA)

Colorimeters trade spectral resolution for simplicity. They measure tristimulus values through broad-band filters, then convert to **CIELAB color space** under a standard illuminant (typically D65) and observer (10°). CIELAB encodes color as:

- $$L^*$$ — lightness, 0 (black) to 100 (white)
- $$a^*$$ — green (–) to red (+)
- $$b^*$$ — blue (–) to yellow (+)

The widely used summary metric is the **Individual Typology Angle**:

$$ \mathrm{ITA} = \frac{180}{\pi}\arctan\!\left(\frac{L^* - 50}{b^*}\right) $$

ITA is the angle subtended from the reference point $$(L^*=50, b^*=0)$$ to the measured $$(L^*, b^*)$$ in the $$L^*$$–$$b^*$$ plane. 
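Since ITA is just a trigonometric transform of $$L^*$$ and $$b^*$$, it fits in a few lines. A minimal sketch in Python — the category thresholds are the Del Bino & Bernerd bands commonly used in the literature, and `atan2` is used so $$b^* = 0$$ doesn't divide by zero:

```python
import math

def ita_degrees(L_star, b_star):
    """Individual Typology Angle (degrees) from CIELAB L* and b*."""
    # atan2 handles b* = 0 gracefully (returns +/-90 deg)
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def ita_category(ita):
    """Map an ITA value to the Del Bino & Bernerd (2013) bands."""
    bands = [(55, "very light"), (41, "light"), (28, "intermediate"),
             (10, "tan"), (-30, "brown")]
    for threshold, name in bands:
        if ita > threshold:
            return name
    return "dark"

# Example: a plausible light-skin CIELAB reading (L* = 65, b* = 15)
ita = ita_degrees(65.0, 15.0)   # roughly 45 degrees
category = ita_category(ita)
```

This is exactly the quantity computed per-pixel in the smartphone pipeline discussed below; averaging it over an ROI gives a single skin-tone number per capture.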
Empirically, skin tones cluster along a banana-shaped curve in this plane — lighter tones in the upper-right region, darker tones lower-left. As melanin content increases, ITA decreases.

### The catch: ITA is blood-confounded

This was the most interesting paper in the talk for me. Harunani et al. (SPIE 2025) used a [three-layer skin model](https://doi.org/10.1117/12.3044143) — epidermis, dermis, background — driven by published absorption and scattering spectra and inverted via the adding-doubling method. They asked a clean causal question: hold one chromophore constant, sweep the other, and see where the resulting reflectance lands in $$L^*$$–$$b^*$$. Two findings that change how you should think about ITA:

1. **Iso-melanin curves form the empirical banana.** Holding melanin fixed and sweeping blood volume fraction (0.2%–7%) traces a smooth curve in $$L^*$$–$$b^*$$. Each melanin level produces its own non-overlapping curve. Stack the curves together and you reproduce the empirically observed banana — with low-melanin skin at the top and high-melanin skin at the bottom.
2. **Blood volume slides points along those curves.** Vascular maneuvers (occlusion, congestion) change ITA without changing melanin at all. So ITA is a useful *summary*, but it's not a clean estimate of melanin specifically.

Layer thickness matters too — but asymmetrically. Epidermal thickness (60→120 µm) shifts the whole distribution noticeably, while dermis thickness (1→2 mm) has a much smaller effect. Epidermal structure dominates color variation.

The practical implication: if you actually want to infer chromophore concentrations from color, **use the iso-melanin / iso-blood family of curves rather than ITA alone**. ITA is a coarse one-dimensional projection of an inherently two-dimensional problem.

## Measurement technique 3: Smartphone colorimetry

Burrow et al. (Biophoton. 
Discovery 2025) showed you can [approximate a professional colorimeter with an iPhone 11](https://doi.org/10.1117/1.BIOS.2.3.032504), if you control the conditions carefully. Their pipeline (the "SITA" algorithm):

1. Capture an RGB image of the finger at fixed distance (~7 cm) and angle (perpendicular).
2. Pick a square ROI within the image (captures are typically 3024×3024, 8-bit JPEG).
3. Normalize RGB to [0, 1] and gamma-linearize per the sRGB transfer function.
4. Convert linear RGB to **CIE XYZ** (1931 standard observer).
5. Normalize by the D65 reference white and convert to **CIELAB** with the standard nonlinearity.
6. Compute ITA per pixel and average across the ROI.

Compared against a benchtop DSM-4 colorimeter, smartphone-derived ITA agreed reasonably well — but only under controlled conditions:

- **Anatomic site matters.** The dorsal side of the finger spans a wider ITA range than the palmar side, making it more discriminative for skin-tone classification.
- **Exposure drift.** As exposure increases, the ITA distribution shifts toward more positive values and broadens — your skin "lightens" numerically even though biology hasn't changed.
- **Lighting matters.** The most stable agreement with the reference colorimeter came under **ambient lights off, flash off**, with exposure ≈ 0.7 in their setup.

The takeaway: smartphone colorimetry is a feasible path to scalable, low-cost skin-tone assessment — but you have to *fix* exposure, geometry, white balance, and ambient lighting, and ideally calibrate per-device. A free-running auto-exposure smartphone capture is essentially uncalibrated.

## A note on Fitzpatrick and Monk

Two qualitative skin-color scales worth knowing about:

- **Fitzpatrick (I–VI)**: classifies skin by its tanning/burning response to UV. Widely used in dermatology, but only six bins, with the upper end (V, VI) covering a huge range of darker pigmentations. 
- **Monk Scale (1–10)**: developed by Ellis Monk, [adopted by Google](https://skintone.google/) for technology evaluation. Ten shades, with broader coverage of darker tones. Research suggests it's more inclusive and more reliable for human-rater classification of medical and consumer technologies.

Neither replaces a quantitative measurement (DRS, colorimeter, ITA), but Monk in particular is a reasonable choice when you need a discrete categorical variable — for stratifying clinical trials, training datasets, or human-rater protocols.

## Takeaways

- **The pulse-ox bias has a clean physical explanation.** It isn't a black box. Once you've sat with the melanin and hemoglobin absorption spectra, the bias is almost predictable.
- **One number is rarely enough.** ITA is a one-dimensional summary of a two-dimensional space; collapsing $$L^*$$ and $$b^*$$ into a single angle confounds melanin with blood volume in ways that matter clinically.
- **Practical smartphone-based measurement is feasible.** The hard part is consistency — fixed geometry, fixed exposure, controlled lighting. Anything that varies the optical path or the camera response becomes an uncontrolled covariate.
- **Calibration is upstream of fairness.** A lot of the conversation around algorithmic bias in medical imaging starts with the model. The deeper problem is often that the *measurement itself* is pigmentation-dependent before any algorithm sees the data.

This is also why I find it useful in my own work to treat skin tone (and exposure, and finger curvature, and ambient light) as **first-class design variables** — not as nuisance factors to correct for downstream.

---

### Key references

- Sjoding et al. *Racial Bias in Pulse Oximetry Measurement.* NEJM, 2020. [doi:10.1056/NEJMc2029240](https://doi.org/10.1056/NEJMc2029240)
- Vasudevan et al. *Melanometry for objective evaluation of skin pigmentation in pulse oximetry studies.* Communications Medicine, 2024. 
[doi:10.1038/s43856-024-00550-7](https://doi.org/10.1038/s43856-024-00550-7)
- Bajrami et al. *Human Skin Models in Biophotonics.* Adv. Healthcare Materials, 2025. [doi:10.1002/adhm.202501894](https://doi.org/10.1002/adhm.202501894)
- Harunani et al. *Establishing a causal link between the physiologic range of skin chromophore concentrations and physiologically relevant regions of CIELAB color space.* SPIE 13317, 2025. [doi:10.1117/12.3044143](https://doi.org/10.1117/12.3044143)
- Burrow et al. *Smartphone tristimulus colorimetry for skin-tone analysis at common pulse oximetry anatomical sites.* Biophoton. Discovery, 2025. [doi:10.1117/1.BIOS.2.3.032504](https://doi.org/10.1117/1.BIOS.2.3.032504)
- Putcha et al. *Characterizing the influence of skin pigmentation on pulse oximetry.* Biophoton. Discovery, 2025. [doi:10.1117/1.BIOS.2.3.032506](https://doi.org/10.1117/1.BIOS.2.3.032506)
- Del Bino &amp; Bernerd. *Variations in skin colour and the biological consequences of UV exposure.* British Journal of Dermatology, 2013. [doi:10.1111/bjd.12529](https://doi.org/10.1111/bjd.12529)

[Open Oximetry: Skin Color Quantification](https://openoximetry.org/skin-color-quantification/) is also a great living reference for this topic, with up-to-date reviews of the measurement landscape. </div>]]></content><author><name></name></author><category term="notes"/><category term="optics"/><category term="biophotonics"/><category term="colorimetry"/><category term="journal-club"/><summary type="html"><![CDATA[Journal club on how light interacts with skin, why pulse oximeters under-perform on darkly pigmented patients, and how we can quantify skin pigmentation objectively.]]></summary></entry></feed>