Toolboxes

photbiochem/

py:
  • __init__.py

  • cie_tn003_2015.py

  • ASNZS_1680_2_5_1997_COI.py

  • circadian_CS_CLa_lrc.py

namespace:

luxpy.photbiochem

Module for calculating CIE (S026:2018 & TN003:2015) photobiological quantities

(Eelc, Eemc, Eesc, Eer, Eez, and Elc, Emc, Esc, Er, Ez)

| Photoreceptor | Photopigment (label, α) | Spectral efficiency sα(λ) | Quantity (α-opic irradiance) | Q-symbol (Ee,α) | Unit symbol |
|---------------|-------------------------|---------------------------|------------------------------|-----------------|-------------|
| l-cone        | photopsin (lc)          | erythrolabe               | erythropic                   | Ee,lc           | W⋅m−2       |
| m-cone        | photopsin (mc)          | chlorolabe                | chloropic                    | Ee,mc           | W⋅m−2       |
| s-cone        | photopsin (sc)          | cyanolabe                 | cyanopic                     | Ee,sc           | W⋅m−2       |
| rod           | rhodopsin (r)           | rhodopic                  | rhodopic                     | Ee,r            | W⋅m−2       |
| ipRGC         | melanopsin (z)          | melanopic                 | melanopic                    | Ee,z            | W⋅m−2       |

CIE recommends that the α-opic irradiance is determined by convolving the spectral
irradiance, Ee,λ(λ) (W⋅m−2), for each wavelength, with the action spectrum, sα(λ),
where sα(λ) is normalized to one at its peak:

Ee,α = ∫ Ee,λ(λ) sα(λ) dλ

where the corresponding units are W⋅m−2 in each case.

The equivalent luminance is calculated as:

E,α = Km ⋅ ∫ Ee,λ(λ) sα(λ) dλ ⋅ ∫ V(λ) dλ / ∫ sα(λ) dλ

To avoid ambiguity, the weighting function used must be stated, so, for example,
cyanopic refers to the cyanopic irradiance weighted using
the s-cone or ssc(λ) spectral efficiency function.
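
Numerically, the Ee,α integral above is just a weighted sum over the sampled wavelengths. The following is a minimal NumPy sketch of that calculation, a generic illustration rather than luxpy's internal implementation; it assumes the spectral irradiance and the action spectrum are tabulated on the same wavelength grid:

    import numpy as np

    def aopic_irradiance(wl, Ee_lambda, s_alpha):
        """Ee,a = ∫ Ee,λ(λ) sα(λ) dλ, approximated with the trapezoidal rule.

        wl        : wavelengths [nm]
        Ee_lambda : spectral irradiance [W·m-2·nm-1] on the same grid
        s_alpha   : action spectrum sα(λ), normalized to 1 at its peak
        """
        return np.trapz(Ee_lambda * s_alpha, wl)   # α-opic irradiance [W·m-2]

    # toy example: a flat 1 mW·m-2·nm-1 spectrum weighted by a hypothetical
    # Gaussian action spectrum peaking at 490 nm
    wl = np.arange(380.0, 781.0)
    Ee = np.full_like(wl, 1e-3)
    s = np.exp(-0.5 * ((wl - 490.0) / 40.0) ** 2)
    print(aopic_irradiance(wl, Ee, s))
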
_PHOTORECEPTORS:

['l-cone', 'm-cone', 's-cone', 'rod', 'iprgc']

_Ee_SYMBOLS:

['Ee,lc', 'Ee,mc', 'Ee,sc', 'Ee,r', 'Ee,z']

_E_SYMBOLS:

['E,lc', 'E,mc', 'E,sc', 'E,r', 'E,z']

_Q_SYMBOLS:

['Q,lc', 'Q,mc', 'Q,sc', 'Q,r', 'Q,z']

_Ee_UNITS:

['W⋅m−2'] * 5

_E_UNITS:

['lux'] * 5

_Q_UNITS:

['photons/m2/s'] * 5

_QUANTITIES:
list with actinic types of irradiance/illuminance:
['erythropic', 'chloropic', 'cyanopic', 'rhodopic', 'melanopic']
_ACTIONSPECTRA:
ndarray with default CIE-S026:2018 alpha-actinic action spectra. (stored in file:
‘./data/cie_S026_2018_SI_action_spectra_CIEToolBox_v1.049.dat’)
_ACTIONSPECTRA_CIES026:
ndarray with alpha-actinic action spectra. (stored in file:
‘./data/cie_S026_2018_SI_action_spectra_CIEToolBox_v1.049.dat’)
_ACTIONSPECTRA_CIETN003:
ndarray with CIE-TN003:2015 alpha-actinic action spectra. (stored in file:
‘./data/cie_tn003_2015_SI_action_spectra.dat’)
spd_to_aopicE():
Calculate alpha-opic irradiance (Ee,α) and equivalent
luminance (Eα) values for the l-cone, m-cone, s-cone,
rod and iprgc (α) photoreceptor cells following
CIE S026:2018 (= default actionspectra) or CIE TN003:2015.
spd_to_aopicEDI():
Calculate alpha-opic equivalent daylight (D65) illuminance (lx)
for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.
spd_to_aopicDER():
Calculate α-opic Daylight (D65) Efficacy Ratio
for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.
spd_to_aopicELR():
Calculate α-opic Efficacy of Luminous Radiation
for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.
References:

1. CIE-S026:E2018 (2018). CIE System for Metrology of Optical Radiation for ipRGC-Influenced Responses to Light (Vienna, Austria). (https://files.cie.co.at/CIE%20S%20026%20alpha-opic%20Toolbox%20User%20Guide.pdf)

2. CIE-TN003:2015 (2015). Report on the first international workshop on circadian and neurophysiological photometry, 2013 (Vienna, Austria). (http://files.cie.co.at/785_CIE_TN_003-2015.pdf)

Module for calculation of cyanosis index (AS/NZS 1680.2.5:1997)

_COI_OBS:

Default CMF set for calculations

_COI_CSPACE:

Default color space (CIELAB)

_COI_RFL_BLOOD:

ndarray with reflectance spectra of 100% and 50% oxygenated blood

spd_to_COI_ASNZS1680:

Calculate the Cyanosis Observation Index (COI) [ASNZS 1680.2.5-1995]

Reference:

AS/NZS1680.2.5 (1997). INTERIOR LIGHTING PART 2.5: HOSPITAL AND MEDICAL TASKS.

Module for Blue light hazard calculations

_BLH:

Blue Light Hazard function

spd_to_blh_eff():

Calculate Blue Light Hazard efficacy (K) or efficiency (eta) of radiation.

References:
  1. IEC 62471:2006, 2006, Photobiological safety of lamps and lamp systems.

  2. IEC TR 62778, 2014, Application of IEC 62471 for the assessment of blue light hazard to light sources and luminaires.

luxpy.toolboxes.photbiochem.spd_to_aopicE(sid, Ee=None, E=None, Q=None, cieobs='1931_2', K=None, sid_units='W/m2', out='Eeas', actionspectra='CIE-S026', interp_settings=None)[source]

Calculate alpha-opic irradiance (Ee,α) values (W/m²) for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells following CIE S026:2018.

Args:
sid:
numpy.ndarray with retinal spectral irradiance in :sid_units:
(if ‘uW/cm2’, sid will be converted to SI units ‘W/m2’)
Ee:
None, optional
If not None: normalize :sid: to an irradiance of :Ee:
E:
None, optional
If not None: normalize :sid: to an illuminance of :E:
Note that E is calculated using a Km factor corrected to standard air.
Q:
None, optional
If not None: Normalize :sid: to a quantal energy of :Q:
cieobs:
_CIEOBS or str, optional
Type of cmf set to use for photometric units.
sid_units:
‘W/m2’, optional
Other option ‘uW/m2’, input units of :sid:
out:
‘Eeas’ or str, optional
Determines values to return.
(to also get the equivalent illuminance Eα, set :out: to ‘Eeas,Eas’)
actionspectra:
‘CIE-S026’, optional
Actionspectra to use in calculation
options:
- ‘CIE-S026’: will use action spectra as defined in CIE S026
- ‘CIE-TN003’: will use action spectra as defined in CIE TN003
Returns:
returns:
Eeas a numpy.ndarray with the α-opic irradiance
of all spectra in :sid: in SI-units (W/m²).

(other choice can be set using :out:)
References:

1. CIE-S026:E2018 (2018). CIE System for Metrology of Optical Radiation for ipRGC-Influenced Responses to Light (Vienna, Austria). (https://files.cie.co.at/CIE%20S%20026%20alpha-opic%20Toolbox%20User%20Guide.pdf)

2. CIE-TN003:2015 (2015). Report on the first international workshop on circadian and neurophysiological photometry, 2013 (Vienna, Austria). (http://files.cie.co.at/785_CIE_TN_003-2015.pdf)
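
A minimal usage sketch based on the signature above; the illuminant lookup via lx._CIE_ILLUMINANTS is an assumption about the top-level luxpy namespace:

    import luxpy as lx
    from luxpy.toolboxes import photbiochem as pb

    spd = lx._CIE_ILLUMINANTS['D65']   # assumed built-in illuminant table (row 0 = wavelengths)

    # α-opic irradiances (W/m²), with the input renormalized to 100 lx:
    Eeas = pb.spd_to_aopicE(spd, E=100, cieobs='1931_2', actionspectra='CIE-S026')

    # also request the α-opic equivalent illuminances:
    Eeas, Eas = pb.spd_to_aopicE(spd, E=100, out='Eeas,Eas')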

luxpy.toolboxes.photbiochem.spd_to_aopicEDI(sid, Ee=None, E=None, Q=None, cieobs='1931_2', K=None, sid_units='W/m2', actionspectra='CIE-S026', ref='D65', out='a_edi', interp_settings=None)[source]

Calculate alpha-opic equivalent daylight (D65) illuminance (lux) for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.

Args:
sid:
numpy.ndarray with retinal spectral irradiance in :sid_units:
(if ‘uW/cm2’, sid will be converted to SI units ‘W/m2’)
Ee:
None, optional
If not None: normalize :sid: to an irradiance of :Ee:
E:
None, optional
If not None: normalize :sid: to an illuminance of :E:
Note that E is calculated using a Km factor corrected to standard air.
Q:
None, optional
If not None: normalize :sid: to a quantal energy of :Q:
cieobs:
_CIEOBS or str, optional
Type of cmf set to use for photometric units.
sid_units:
‘W/m2’, optional
Other option ‘uW/m2’, input units of :sid:
actionspectra:
‘CIE-S026’, optional
Actionspectra to use in calculation
options:
- ‘CIE-S026’: will use action spectra as defined in CIE S026
- ‘CIE-TN003’: will use action spectra as defined in CIE TN003
ref:
‘D65’, optional
Reference (daylight) spectrum to use. (‘D65’ or ‘E’ or ndarray)
out:
‘Eeas, Eas’ or str, optional
Determines values to return.
Returns:
returns:
ndarray with the α-opic Equivalent Daylight Illuminance (lux)
for the l-cone, m-cone, s-cone, rod and iprgc photoreceptors
of all spectra in :sid:.
luxpy.toolboxes.photbiochem.spd_to_aopicDER(sid, cieobs='1931_2', K=None, sid_units='W/m2', actionspectra='CIE-S026', ref='D65', interp_settings=None)[source]

Calculate α-opic Daylight (D65) Efficacy Ratio (= α-opic Daylight (D65) Efficiency) for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.

Args:
sid:
numpy.ndarray with retinal spectral irradiance in :sid_units:
(if ‘uW/cm2’, sid will be converted to SI units ‘W/m2’)
cieobs:
_CIEOBS or str, optional
Type of cmf set to use for photometric units.
sid_units:
‘W/m2’, optional
Other option ‘uW/m2’, input units of :sid:
actionspectra:
‘CIE-S026’, optional
Actionspectra to use in calculation
options:
- ‘CIE-S026’: will use action spectra as defined in CIE S026
- ‘CIE-TN003’: will use action spectra as defined in CIE TN003
ref:
‘D65’, optional
Reference (daylight) spectrum to use. (‘D65’ or ‘E’ or ndarray)
Returns:
returns:
ndarray with the α-opic Daylight Efficacy Ratio
for the l-cone, m-cone, s-cone, rod and iprgc photoreceptors
of all spectra in :sid:.
luxpy.toolboxes.photbiochem.spd_to_aopicELR(sid, cieobs='1931_2', K=None, sid_units='W/m2', actionspectra='CIE-S026', interp_settings=None)[source]

Calculate α-opic Efficacy of Luminous Radiation (W/lm) for the l-cone, m-cone, s-cone, rod and iprgc (α) photoreceptor cells.

Args:
sid:
numpy.ndarray with retinal spectral irradiance in :sid_units:
(if ‘uW/cm2’, sid will be converted to SI units ‘W/m2’)
cieobs:
_CIEOBS or str, optional
Type of cmf set to use for photometric units.
sid_units:
‘W/m2’, optional
Other option ‘uW/m2’, input units of :sid:
actionspectra:
‘CIE-S026’, optional
Actionspectra to use in calculation
options:
- ‘CIE-S026’: will use action spectra as defined in CIE S026
- ‘CIE-TN003’: will use action spectra as defined in CIE TN003
Returns:
returns:
ndarray with the α-opic Efficacy of Luminous Radiation (W/lm)
for the l-cone, m-cone, s-cone, rod and iprgc photoreceptors
of all spectra in :sid:.
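
As a sketch, the derived daylight-referenced quantities documented above could be obtained analogously (same assumption about the illuminant lookup as in the earlier example):

    import luxpy as lx
    from luxpy.toolboxes import photbiochem as pb

    spd = lx._CIE_ILLUMINANTS['D65']          # assumed illuminant lookup

    a_edi = pb.spd_to_aopicEDI(spd, E=500)    # α-opic equivalent daylight illuminances (lx) at 500 lx
    a_der = pb.spd_to_aopicDER(spd)           # α-opic daylight (D65) efficacy ratios
    a_elr = pb.spd_to_aopicELR(spd)           # α-opic efficacies of luminous radiation (W/lm)
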
luxpy.toolboxes.photbiochem.spd_to_COI_ASNZS1680(S=None, tf='lab', cieobs='1931_2', out='COI,cct', extrapolate_rfl=False)[source]

Calculate the Cyanosis Observation Index (COI) [ASNZS 1680.2.5-1995].

Args:
S:
ndarray with light source spectrum (first column are wavelengths).
tf:
_COI_CSPACE, optional
Color space in which to calculate the COI.
Default is CIELAB.
cieobs:
_COI_CIEOBS, optional
CMF set to use.
Default is ‘1931_2’.
out:
‘COI,cct’ or str, optional
Determines output.
extrapolate_rfl:
False, optional
If False:
limit the wavelength range of the source to that of the standard
reflectance spectra for the 50% and 100% oxygenated blood.
Returns:
COI:
ndarray with cyanosis indices for input sources.
cct:
ndarray with correlated color temperatures.
Note:

Clause 7.2 of the ASNZS 1680.2.5-1995 standard specifies the properties required of light sources used in regions where visual conditions suitable for the detection of cyanosis should be provided:

1. The correlated color temperature (CCT) of the source should be from 3300 to 5300 K.

2. The cyanosis observation index should not exceed 3.3.
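
A hedged usage sketch based on the documented signature (the F4 illuminant lookup is an assumption about the top-level luxpy namespace):

    import luxpy as lx
    from luxpy.toolboxes import photbiochem as pb

    S = lx._CIE_ILLUMINANTS['F4']              # assumed: test source spectrum (row 0 = wavelengths)
    COI, cct = pb.spd_to_COI_ASNZS1680(S=S, out='COI,cct')

    # acceptance check following clause 7.2 above:
    ok = (COI <= 3.3) & (cct >= 3300) & (cct <= 5300)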

luxpy.toolboxes.photbiochem.spd_to_CS_CLa_lrc(El=None, version='CLa2.0', E=None, sum_sources=False, interpolate_sources=True, t_CS=1.0, f_CS=1.0)[source]

Calculate Circadian Stimulus (CS) and Circadian Light (CLa, CLa2.0).

Args:
El:
ndarray, optional
Defaults to D65
light source spectral irradiance distribution
version:
‘CLa2.0’, optional
CLa version to calculate
Options:
- ‘CLa1.0’: Rea et al. 2012
- ‘CLa2.0’: Rea et al. 2021, 2022
E:
None, float or ndarray, optional
Illuminance of light sources.
If None: El is used as is, otherwise El is renormalized to have
an illuminance equal to E.
sum_sources:
False, optional
- False: calculate CS (1.0,2.0) and CLa (1.0, 2.0) for all sources in El array.
- True: sum sources in El to a single source and perform calc.
interpolate_sources:
True, optional
- True: El is interpolated to wavelength range of efficiency
functions (as in LRC calculator).
- False: interpolate efficiency functions to source range.
Source interpolation is not recommended due to possible
errors for peaky spectra.
(see CIE15-2018, “Colorimetry”).
t_CS:
1.0, optional
The duration factor (in hours): a continuous value from 0.5 to 3.0
f_CS:
1.0, optional
The spatial distribution factor: a discrete value (2, 1, or 0.5)
depending upon the spatial distribution of the light source.
Default = 1 (for t = 1 h, CS is equal to the 2012 version).
Options:
- 2.0: full visual field, as with a Ganzfeld.
- 1.0: central visual field, as with a discrete light box on a desk.
- 0.5: superior visual field, as from ceiling mounted down-light fixtures.
Returns:
CS:
ndarray with Circadian stimulus values
CLa:
ndarray with Circadian Light values
Notes on CLa1.0 (2012 version):

1. The original 2012 model (Eq. 1) set the peak wavelength of melanopsin at 480 nm. Rea et al. later published a corrigendum with updated model parameters for k, a_{b-y} and a_rod. The comparison table showing values calculated for a number of sources with the old and the updated parameters indicates the differences are very small (~1 unit for CLa).

2. That correction paper did not mention a change in the factor (1622) that multiplies the (sum of the) integral(s) in Eq. 1. HOWEVER, the Excel calculator released in 2017 and the online calculator show that factor to have a value of 1547.9. The change in values due to the new factor is much larger than that due to the update mentioned in note 1!

3. For reasons of consistency the calculator uses the latest model parameters, as read from the Excel calculator. The values adopted are: multiplier 1547.9, k = 0.2616, a_{b-y} = 0.7 and a_rod = 3.3.

4. The parameter values to convert CLa to CS were also taken from the 2017 Excel calculator.

Notes on CLa2.0 (2021 version):

1. In the original model, 1000 lux of CIE Illuminant A resulted in a CLA = 1000. In the revised model, a photopic illuminance of 1000 lux from CIE Illuminant A (approximately that of an incandescent lamp operated at 2856 K) results in a CLA 2.0 = 813. The value of 813 CLA 2.0 should be used by those wishing to calibrate instrumentation designed to report CLA 2.0 and CS. CLA 2.0 values can still be used to approximate the photopic illuminance, in lux, from a nonspecific “white” light source. For comparison, CLA 2.0 values should be multiplied by 1.23 to estimate the equivalent photopic illuminance from CIE Illuminant A, or by 0.66 to estimate the equivalent photopic illuminance from CIE Illuminant D65 (an approximation of daylight with a CCT of 6500 K).

2. Nov. 6, 2021: To get a value of CLa2.0 = 813, Eq. 3 from the paper must be adjusted so that the S-cone and Vlambda functions are also divided by the transmission of the macula (‘mp’ in the paper) prior to calculating the integrals in the denominators of the first factor after the a_rod_1 and a_rod_2 scalars! Failure to do so results in a CLa2.0 of 800 instead of the 813 reported by the online calculator. Verification of the code on github shows that these denominators are indeed calculated using the macular-transmission-divided S-cone and Vlambda functions. Is this an error in the code or in the paper?

3. Feb. 22, 2022: A corrigendum has been released for Eq. 3 in the original paper, where the normalization is indeed done.

4. Feb. 22, 2022: While the rodsat value in the corrigendum is defined as 6.50 W/m², this calculator uses the value used in the online calculator: 6.5215 W/m² (see the code base on github).

References:

1. LRC Online Circadian stimulus calculator

2. LRC Excel based Circadian stimulus calculator.

3. Rea MS, Figueiro MG, Bierman A, and Hamner R (2012). Modelling the spectral sensitivity of the human circadian system. Light. Res. Technol. 44, 386–396.

4. Rea MS, Figueiro MG, Bierman A, and Hamner R (2012). Erratum: Modeling the spectral sensitivity of the human circadian system (Lighting Research and Technology (2012) 44:4 (386-396)). Light. Res. Technol. 44, 516.

5. Rea, M. S., Nagare, R., & Figueiro, M. G. (2021). Modeling Circadian Phototransduction: Quantitative Predictions of Psychophysical Data. Frontiers in Neuroscience, 15, 44.

6. Rea, M. S., Nagare, R., & Figueiro, M. G. (2022). Corrigendum: Modeling Circadian Phototransduction: Quantitative Predictions of Psychophysical Data. Frontiers in Neuroscience, 16.

7. LRC Online Circadian stimulus calculator (CLa2.0, 2021)

8. Github code: LRC Online Circadian stimulus calculator (CLa2.0, accessed Nov. 5, 2021)
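
A minimal usage sketch based on the documented signature (illuminant lookup assumed as before; the return order follows the Returns section above):

    import luxpy as lx
    from luxpy.toolboxes import photbiochem as pb

    El = lx._CIE_ILLUMINANTS['D65']   # assumed illuminant lookup

    # CS and CLa 2.0 for a source renormalized to 300 lx, 1 h exposure, desk-top light box:
    CS, CLa = pb.spd_to_CS_CLa_lrc(El=El, version='CLa2.0', E=300, t_CS=1.0, f_CS=1.0)

    # convert CLa back to CS explicitly:
    CS_check = pb.CLa_to_CS(CLa, t=1, f=1, forward=True)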

luxpy.toolboxes.photbiochem.CLa_to_CS(CLa, t=1, f=1, forward=True)[source]

Convert Circadian Light to Circadian Stimulus (and back).

Args:
CLa:
ndarray with Circadian Light values
or Circadian Stimulus values (if forward == False)
t:
1.0, optional
The duration factor (in hours): a continuous value from 0.5 to 3.0
f:
1.0, optional
The spatial distribution factor: a discrete value (2, 1, or 0.5)
depending upon the spatial distribution of the light source.
Default = 1 (for t = 1 h, CS is equal to the 2012 version).
Options:
- 2.0: full visual field, as with a Ganzfeld.
- 1.0: central visual field, as with a discrete light box on a desk.
- 0.5: superior visual field, as from ceiling mounted down-light fixtures.
forward:
True, optional
If True: convert CLa to CS values.
If False: convert CS values to CLa values.
Returns:
CS:
ndarray with CS values or with CLa values (if forward == False)
References:

1. Rea MS, Figueiro MG, Bierman A, and Hamner R (2012). Modelling the spectral sensitivity of the human circadian system. Light. Res. Technol. 44, 386–396.

2. Rea MS, Figueiro MG, Bierman A, and Hamner R (2012). Erratum: Modeling the spectral sensitivity of the human circadian system (Lighting Research and Technology (2012) 44:4 (386-396)). Light. Res. Technol. 44, 516.

3. Rea, M. S., Nagare, R., & Figueiro, M.G. (2021). Modeling Circadian Phototransduction: Quantitative Predictions of Psychophysical Data. Frontiers in Neuroscience, 15, 44.

4. LRC Online Circadian Stimulus calculator (CLa2.0, 2021)

luxpy.toolboxes.photbiochem.spd_to_blh_eff(spd, efficacy=True, cieobs='1931_2', src='dict', K=None)[source]

Calculate Blue Light Hazard efficacy (K) or efficiency (eta) of radiation.

Args:
spd:
ndarray with spectral data
cieobs:
str, optional
Sets the type of Vlambda function to obtain.
src:
‘dict’ or array, optional
- ‘dict’: get ybar from the _CMF dict
- ‘array’: ndarray in :cieobs:
Determines whether to load cmfs from file (./data/cmfs/)
or from dict defined in .cmf.py
Vlambda is obtained by collecting Ybar.
K:
None, optional
e.g. K = 683 lm/W for ‘1931_2’ (relative == False)
or K = 100/sum(spd*dl) (relative == True)
Returns:
eff:
ndarray with blue light hazard efficacy or efficiency of radiation values.
References:
  1. IEC 62471:2006, 2006, Photobiological safety of lamps and lamp systems.

  2. IEC TR 62778, 2014, Application of IEC 62471 for the assessment of blue light hazard to light sources and luminaires.
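
A minimal usage sketch with the documented signature (illuminant lookup assumed as before):

    import luxpy as lx
    from luxpy.toolboxes import photbiochem as pb

    spd = lx._CIE_ILLUMINANTS['D65']                  # assumed illuminant lookup

    K_blh = pb.spd_to_blh_eff(spd, efficacy=True)     # Blue Light Hazard efficacy of radiation
    eta_blh = pb.spd_to_blh_eff(spd, efficacy=False)  # Blue Light Hazard efficiency of radiation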

indvcmf/

py:
  • __init__.py

  • individual_observer_cmf_model.py

namespace:

luxpy.indvcmf

Module for Individual Observer lms-CMFs (Asano, 2016 and CIE TC1-97)

_DATA_PATH:

path to data files

_DATA:

Dict with required data

_DSRC_STD_DEF:

default data source for stdev of physiological data (‘matlab’, ‘germany’)

_DSRC_LMS_ODENS_DEF:

default data source for lms absorbances and optical densities (‘asano’, ‘cietc197’)

_LMS_TO_XYZ_METHOD:

default method to calculate lms to xyz conversion matrix (‘asano’, ‘cietc197’)

_WL_CRIT:

critical wavelength above which interpolation of S-cone data fails.

_WL:

default wavelengths of spectral data in INDVCMF_DATA.

load_database():

Load a database with parameters and data required by the Asano model.

init():

Initialize: load the database required for the Asano Individual Observer Model into the default _DATA dict and set options for rounding, significant figures and chopping of small values to zero, as well as the source data to use for the LMS absorbances and optical densities, …

query_state():

print current settings for global variables.

compute_cmfs():

Generate Individual Observer CMFs (cone fundamentals) based on the CIE2006 cone fundamentals and published literature on observer variability in color matching and in physiological parameters (use of the Asano optical data and model, or of the CIE TC1-97 data and ‘variability’-extended model, is possible).

cie2006cmfsEx():

Generate Individual Observer CMFs (cone fundamentals) based on the CIE2006 cone fundamentals and published literature on observer variability in color matching and in physiological parameters (use of the Asano optical data and model, or of the CIE TC1-97 data and ‘variability’-extended model, is possible).

getMonteCarloParam():

Get dict with normally-distributed physiological factors for a population of observers.

getUSCensusAgeDist():

Get US Census Age Distribution

genMonteCarloObs():

Monte-Carlo generation of individual observer color matching functions (cone fundamentals) for a certain age and field size.

getCatObs():

Generate cone fundamentals for categorical observers.

get_lms_to_xyz_matrix():

Calculate lms to xyz conversion matrix for a specific field size determined as a weighted combination of the 2° and 10° matrices.

lmsb_to_xyzb():

Convert from LMS cone fundamentals to XYZ CMFs using conversion matrix determined as a weighted combination of the 2° and 10° matrices.

add_to_cmf_dict():

Add set of cmfs to _CMF dict.

plot_cmfs():

Plot cmf set.

Notes

1. Port of Matlab code from: https://www.rit.edu/cos/colorscience/re_AsanoObserverFunctions.php (Accessed April 20, 2018)

2. Adjusted/extended following the CIE TC1-97 Python code (and data): github.com/ifarup/ciefunctions (Copyright (C) 2012-2017 Ivar Farup and Jan Henrik Wold) (Accessed Dec 18, 2019)

luxpy.toolboxes.indvcmf.load_database(wl=None, dsrc_std=None, dsrc_lms_odens=None, path=None)[source]

Load database required for Asano Individual Observer Model.

Args:
wl:
None, optional
Wavelength range to interpolate data to.
None defaults to the wavelength range associated with data in :dsrc_lms_odens:
path:
None, optional
Path where data files are stored (If None: look in ./data/ folder under toolbox path)
dsrc_std:
None, optional
Data source (‘matlab’ code, or ‘germany’) for stdev data on physiological factors.
None defaults to string in _DSRC_STD_DEF
dsrc_lms_odens:
None, optional
Data source (‘asano’, ‘cietc197’) for LMS absorbance and optical density data.
None defaults to string in _DSRC_LMS_ODENS_DEF
Returns:
data:
dict with data for:
- ‘LMSa’: LMS absorbances
- ‘rmd’: relative macular pigment density
- ‘docul’: ocular media optical density
- ‘USCensus2010population’: data (age and numbers) on a 2010 US Census
- ‘CatObsPfctr’: dict with iteratively derived Categorical Observer physiological stdevs.
- ‘M2d’: Asano 2° lms to xyz conversion matrix
- ‘M10d’: Asano 10° lms to xyz conversion matrix
- standard deviations on physiological parameters: ‘od_lens’, ‘od_macula’, ‘od_L’, ‘od_M’, ‘od_S’, ‘shft_L’, ‘shft_M’, ‘shft_S’
luxpy.toolboxes.indvcmf.init(wl=None, dsrc_std=None, dsrc_lms_odens=None, lms_to_xyz_method=None, use_sign_figs=True, use_my_round=True, use_chop=True, path=None, out=None, verbosity=1)[source]

Initialize: load the database required for the Asano Individual Observer Model into the default _DATA dict and set options for rounding, significant figures and chopping of small values to zero, as well as the source data to use for the LMS absorbances and optical densities, …

Args:
wl:
None, optional
Wavelength range to interpolate data to.
None defaults to the wavelength range associated with data in :dsrc_lms_odens:
dsrc_std:
None, optional
Data source (‘matlab’ code, or ‘germany’) for stdev data on physiological factors.
None defaults to string in _DSRC_STD_DEF
dsrc_lms_odens:
None, optional
Data source (‘asano’, ‘cietc197’) for LMS absorbance and optical density data.
None defaults to string in _DSRC_LMS_ODENS_DEF
lms_to_xyz_method:
None, optional
Method to use to determine lms-to-xyz conversion matrix (options: ‘asano’, ‘cietc197’)
use_my_round:
True, optional
If True: use my_round() as in the CIE TC1-97 Python code ‘ciefunctions’ (slows down the code),
by setting _USE_MY_ROUND.
use_sign_figs:
True, optional
If True: use sign_figs() as in the CIE TC1-97 Python code ‘ciefunctions’ (slows down the code),
by setting _USE_SIGN_FIGS.
use_chop:
True, optional
If True: use chop() as in the CIE TC1-97 Python code ‘ciefunctions’ (slows down the code),
by setting _USE_CHOP.
path:
None, optional
Path where data files are stored (If None: look in ./data/ folder under toolbox path)
out:
None, optional
If None: only set global variables, do not output _DATA.copy()
verbosity:
1, optional
Print new state of global settings.
Returns:
data:
if out is not None: return a dict with dict with data for:
- ‘LMSa’: LMS absorbances
- ‘rmd’: relative macular pigment density
- ‘docul’: ocular media optical density
- ‘USCensus2010population’: data (age and numbers) on a 2010 US Census
- ‘CatObsPfctr’: dict with iteratively derived Categorical Observer physiological stdevs.
- ‘M2d’: Asano 2° lms to xyz conversion matrix
- ‘M10d’: Asano 10° lms to xyz conversion matrix
- standard deviations on physiological parameters: ‘od_lens’, ‘od_macula’, ‘od_L’, ‘od_M’, ‘od_S’, ‘shft_L’, ‘shft_M’, ‘shft_S’
luxpy.toolboxes.indvcmf.query_state()[source]

Print current settings for ‘global variables’.
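
A hedged sketch of initializing the toolbox and inspecting its data, using the documented signatures (the option strings are those listed above):

    from luxpy.toolboxes import indvcmf

    # set the global state (database + rounding/chopping options):
    indvcmf.init(dsrc_lms_odens='cietc197', lms_to_xyz_method='cietc197', verbosity=1)
    indvcmf.query_state()                  # print the current global settings

    # or load the raw database dict directly:
    data = indvcmf.load_database(dsrc_lms_odens='cietc197')
    print(sorted(data.keys()))             # 'CatObsPfctr', 'LMSa', 'M10d', 'M2d', 'docul', 'rmd', ...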

luxpy.toolboxes.indvcmf.cie2006cmfsEx(age=32, fieldsize=10, wl=None, var_od_lens=0, var_od_macula=0, var_od_L=0, var_od_M=0, var_od_S=0, var_shft_L=0, var_shft_M=0, var_shft_S=0, norm_type=None, out='lms', base=False, strategy_2=True, odata0=None, lms_to_xyz_method=None, allow_negative_values=False, normalize_lms_to_xyz_matrix=False)[source]

Generate Individual Observer CMFs (cone fundamentals) based on CIE2006 cone fundamentals and published literature on observer variability in color matching and in physiological parameters.

Args:
age:
32 or float or int, optional
Observer age
fieldsize:
10, optional
Field size of stimulus in degrees (between 2° and 10°).
wl:
None, optional
Interpolation/extrapolation of :LMS: output to specified wavelengths.
None: output original _WL
var_od_lens:
0, optional
Std Dev. in peak optical density [%] of lens.
var_od_macula:
0, optional
Std Dev. in peak optical density [%] of macula.
var_od_L:
0, optional
Std Dev. in peak optical density [%] of L-cone.
var_od_M:
0, optional
Std Dev. in peak optical density [%] of M-cone.
var_od_S:
0, optional
Std Dev. in peak optical density [%] of S-cone.
var_shft_L:
0, optional
Std Dev. in peak wavelength shift [nm] of L-cone.
var_shft_M:
0, optional
Std Dev. in peak wavelength shift [nm] of M-cone.
var_shft_S:
0, optional
Std Dev. in peak wavelength shift [nm] of S-cone.
norm_type:
None, optional
- ‘max’: normalize LMSq functions to max = 1
- ‘area’: normalize to area
- ‘power’: normalize to power
out:
‘lms’ or ‘xyz’, optional
Determines output.
base:
False, boolean, optional
The returned energy-based LMS cone fundamentals given to the
precision of 9 sign. figs. if ‘True’, and to the precision of
6 sign. figs. if ‘False’.
strategy_2:
True, bool, optional
Use strategy 2 in github.com/ifarup/ciefunctions issue #121 for
computing the weighting factor. If false, strategy 3 is applied.
odata0:
None, optional
Dict with uncorrected ocular media and macula density functions and LMS absorptance functions
None defaults to the ones stored in _DATA
lms_to_xyz_method:
None, optional
Method to use to determine lms-to-xyz conversion matrix (options: ‘asano’, ‘cietc197’)
allow_negative_values:
False, optional
Cone fundamentals or color matching functions should not have negative values.
If False: X[X<0] = 0.
normalize_lms_to_xyz_matrix:
False, optional
Normalize such that EEW is always at [100,100,100] in the XYZ and LMS systems.
Returns:
returns:
- ‘LMS’ [or ‘XYZ’]: ndarray with individual observer equal area-normalized
cone fundamentals. Wavelengths have been added.

[- ‘M’: lms to xyz conversion matrix
- ‘trans_lens’: ndarray with lens transmission
(no interpolation)
- ‘trans_macula’: ndarray with macula transmission
(no interpolation)
- ‘sens_photopig’ : ndarray with photopigment sens.
(no interpolation)]
References:

1. Asano Y, Fairchild MD, and Blondé L, (2016), Individual Colorimetric Observer Model. PLoS One 11, 1–19.

2. Asano Y, Fairchild MD, Blondé L, and Morvan P (2016). Color matching experiment for highlighting interobserver variability. Color Res. Appl. 41, 530–539.

3. CIE TC1-36, (2006), Fundamental Chromaticity Diagram with Physiological Axes - Part I (Vienna: CIE).

4. Asano’s Individual Colorimetric Observer Model

5. CIE TC1-97 Python code for cone fundamentals and XYZ cmf calculations (by Ivar Farup and Jan Henrik Wold, (c) 2012-2017)
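
A minimal sketch of generating an individual-observer set with the documented signature (parameter values are purely illustrative):

    from luxpy.toolboxes import indvcmf

    # LMS cone fundamentals for a 45-year-old observer, 2° field,
    # with a +1 standard-deviation shift of the L-cone peak wavelength:
    lms = indvcmf.cie2006cmfsEx(age=45, fieldsize=2, var_shft_L=1, out='lms')

    # the corresponding XYZ-like color matching functions instead:
    xyz = indvcmf.cie2006cmfsEx(age=45, fieldsize=2, var_shft_L=1, out='xyz')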

luxpy.toolboxes.indvcmf.getMonteCarloParam(n_obs=1, stdDevAllParam={'dsrc': 'matlab', 'od_L': 17.9, 'od_M': 17.9, 'od_S': 14.7, 'od_lens': 19.1, 'od_macula': 37.2, 'shft_L': 4.0, 'shft_M': 3.0, 'shft_S': 2.5})[source]

Get dict with normally-distributed physiological factors for a population of observers.

Args:
n_obs:
1, optional
Number of individual observers in population.
stdDevAllParam:
_DATA[‘stdev’], optional
Dict with parameters for:
[‘od_lens’, ‘od_macula’,
‘od_L’, ‘od_M’, ‘od_S’,
‘shft_L’, ‘shft_M’, ‘shft_S’]
Returns:
returns:
dict with n_obs randomly drawn parameters.
luxpy.toolboxes.indvcmf.genMonteCarloObs(n_obs=1, fieldsize=10, list_Age=[32], wl=None, norm_type=None, out='lms', base=False, strategy_2=True, odata0=None, lms_to_xyz_method=None, allow_negative_values=False)[source]

Monte-Carlo generation of individual observer cone fundamentals.

Args:
n_obs:
1, optional
Number of observer CMFs to generate.
list_Age:
list of observer ages or str, optional
Defaults to 32 (cfr. CIE2006 CMFs)
If ‘us_census’: use US population census of 2010
to generate list_Age.
fieldsize:
fieldsize in degrees (between 2° and 10°), optional
Defaults to 10°.
wl:
None, optional
Interpolation/extrapolation of :LMS: output to specified wavelengths.
None: output original _WL
norm_type:
None, optional
- ‘max’: normalize LMSq functions to max = 1
- ‘area’: normalize to area
- ‘power’: normalize to power
out:
‘lms’ or ‘xyz’, optional
Determines output.
base:
False, boolean, optional
The returned energy-based LMS cone fundamentals given to the
precision of 9 sign. figs. if ‘True’, and to the precision of
6 sign. figs. if ‘False’.
strategy_2:
True, bool, optional
Use strategy 2 in github.com/ifarup/ciefunctions issue #121 for
computing the weighting factor. If false, strategy 3 is applied.
odata0:
None, optional
Dict with uncorrected ocular media and macula density functions and LMS absorptance functions
None defaults to the ones stored in _DATA
lms_to_xyz_method:
None, optional
Method to use to determine lms-to-xyz conversion matrix (options: ‘asano’, ‘cietc197’)
allow_negative_values:
False, optional
Cone fundamentals or color matching functions should not have negative values.
If False: X[X<0] = 0.
Returns:
returns:
LMS [,var_age, vAll]
- LMS: ndarray with population LMS functions.
- var_age: ndarray with population observer ages.
- vAll: dict with population physiological factors (see .keys())
References:

1. Asano Y., Fairchild M.D., and Blondé L., (2016), Individual Colorimetric Observer Model. PLoS One 11, 1–19.

2. Asano Y, Fairchild MD, Blondé L, and Morvan P (2016). Color matching experiment for highlighting interobserver variability. Color Res. Appl. 41, 530–539.

3. CIE TC1-36, (2006), Fundamental Chromaticity Diagram with Physiological Axes - Part I. (Vienna: CIE).

4. Asano’s Individual Colorimetric Observer Model

luxpy.toolboxes.indvcmf.getCatObs(n_cat=10, fieldsize=2, wl=None, norm_type=None, out='lms', base=False, strategy_2=True, odata0=None, lms_to_xyz_method=None, allow_negative_values=False)[source]

Generate cone fundamentals for categorical observers.

Args:
n_cat:
10, optional
Number of observer CMFs to generate.
fieldsize:
fieldsize in degrees (between 2° and 10°), optional
Defaults to 2°.
wl:
None, optional
Interpolation/extrapolation of :LMS: output to specified wavelengths.
None: output original _WL
norm_type:
None, optional
- ‘max’: normalize LMSq functions to max = 1
- ‘area’: normalize to area
- ‘power’: normalize to power
out:
‘lms’ or ‘xyz’, optional
Determines output.
base:
False, boolean, optional
The returned energy-based LMS cone fundamentals given to the
precision of 9 sign. figs. if ‘True’, and to the precision of
6 sign. figs. if ‘False’.
strategy_2:
True, bool, optional
Use strategy 2 in github.com/ifarup/ciefunctions issue #121 for
computing the weighting factor. If false, strategy 3 is applied.
odata0:
None, optional
Dict with uncorrected ocular media and macula density functions and LMS absorptance functions
None defaults to the ones stored in _DATA
lms_to_xyz_method:
None, optional
Method to use to determine lms-to-xyz conversion matrix (options: ‘asano’, ‘cietc197’)
allow_negative_values:
False, optional
Cone fundamentals or color matching functions should not have negative values.
If False: X[X<0] = 0.
Returns:
returns:
LMS [,var_age, vAll]
- LMS: ndarray with population LMS functions.
- var_age: ndarray with population observer ages.
- vAll: dict with population physiological factors (see .keys())
Notes:

1. Categorical observers are observer functions that represent color-normal populations. They are finite and discrete, as opposed to observer functions generated from the individual colorimetric observer model. Thus, they offer a more convenient and practical approach for personalized color imaging workflows and color matching analyses. Categorical observers were derived in two steps. In the first step, 10000 observer functions were generated from the individual colorimetric observer model using Monte Carlo simulation. In the second step, cluster analysis (a modified k-medoids algorithm) was applied to the 10000 observers, minimizing the squared Euclidean distance in cone-fundamental space, and categorical observers were derived iteratively. Since the proposed categorical observers are defined by their physiological parameters and ages, their CMFs can be derived for any target field size.

2. Categorical observers are ordered by importance: the first categorical observer is the average observer (equivalent to CIEPO06 with a 38-year-old observer for a given field size), followed by the second most important categorical observer, the third, and so on.

3. See: https://www.rit.edu/cos/colorscience/re_AsanoObserverFunctions.php

luxpy.toolboxes.indvcmf.compute_cmfs(fieldsize=10, age=32, wl=None, var_od_lens=0, var_od_macula=0, var_shft_LMS=[0, 0, 0], var_od_LMS=[0, 0, 0], norm_type=None, out='lms', base=False, strategy_2=True, odata0=None, lms_to_xyz_method=None, allow_negative_values=False, normalize_lms_to_xyz_matrix=False)[source]

Generate Individual Observer CMFs (cone fundamentals) based on CIE2006 cone fundamentals and published literature on observer variability in color matching and in physiological parameters.

Args:
age:
32 or float or int, optional
Observer age
fieldsize:
10, optional
Field size of stimulus in degrees (between 2° and 10°).
wl:
None, optional
Interpolation/extrapolation of :LMS: output to specified wavelengths.
None: output original _WL
var_od_lens:
0, optional
Variation of optical density of lens.
var_od_macula:
0, optional
Variation of optical density of macula.
var_shft_LMS:
[0, 0, 0] optional
Variation (shift) of LMS peak absorptance.
var_od_LMS:
[0, 0, 0] optional
Variation of LMS optical densities.
norm_type:
None, optional
- ‘max’: normalize LMSq functions to max = 1
- ‘area’: normalize to area
- ‘power’: normalize to power
out:
‘lms’ or ‘xyz’, optional
Determines output.
base:
False, boolean, optional
The returned energy-based LMS cone fundamentals given to the
precision of 9 sign. figs. if ‘True’, and to the precision of
6 sign. figs. if ‘False’.
strategy_2:
True, bool, optional
Use strategy 2 in github.com/ifarup/ciefunctions issue #121 for
computing the weighting factor. If false, strategy 3 is applied.
odata0:
None, optional
Dict with uncorrected ocular media and macula density functions and LMS absorptance functions
None defaults to the ones stored in _DATA
lms_to_xyz_method:
None, optional
Method to use to determine lms-to-xyz conversion matrix (options: ‘asano’, ‘cietc197’)
allow_negative_values:
False, optional
Cone fundamentals or color matching functions should not have negative values.
If False: X[X<0] = 0.
normalize_lms_to_xyz_matrix:
False, optional
Normalize such that EEW is always at [100,100,100] in the XYZ and LMS systems.
Returns:
returns:
- ‘LMS’ [or ‘XYZ’]: ndarray with individual observer equal area-normalized
cone fundamentals. Wavelengths have been added.

[- ‘M’: lms to xyz conversion matrix
- ‘trans_lens’: ndarray with lens transmission
(no interpolation)
- ‘trans_macula’: ndarray with macula transmission
(no interpolation)
- ‘sens_photopig’ : ndarray with photopigment sens.
(no interpolation)]
References:

1. Asano Y, Fairchild MD, and Blondé L, (2016), Individual Colorimetric Observer Model. PLoS One 11, 1–19.

2. Asano Y, Fairchild MD, Blondé L, and Morvan P (2016). Color matching experiment for highlighting interobserver variability. Color Res. Appl. 41, 530–539.

3. CIE, TC1-36, (2006). Fundamental Chromaticity Diagram with Physiological Axes - Part I (Vienna: CIE).

4. Asano’s Individual Colorimetric Observer Model

5. CIE TC1-97 Python code for cone fundamentals and XYZ cmf calculations (by Ivar Farup and Jan Henrik Wold, (c) 2012-2017)

luxpy.toolboxes.indvcmf.add_to_cmf_dict(bar=None, cieobs='indv', K=683, M=array([[1.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 1.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 1.0000e+00]]))[source]

Add set of cmfs to _CMF dict.

Args:
bar:
None, optional
Set of CMFs. None: initializes to empty ndarray.
cieobs:
‘indv’ or str, optional
Name of CMF set.
K:
683 (lm/W), optional
Conversion factor from radiometric to photometric quantity.
M:
np.eye, optional
Matrix for lms to xyz conversion.
luxpy.toolboxes.indvcmf.plot_cmfs(cmf, axh=None, **kwargs)[source]

Plot cmf set.

luxpy.toolboxes.indvcmf.my_round(x, n=0)[source]

Round array x to n decimal points using round half away from zero. This function is needed because the rounding specified in the CIE recommendation is different from the standard rounding scheme in python (which is following the IEEE recommendation).

Args:
x:
ndarray
Array to be rounded
n:
int
Number of decimal points
Returns:
y:
ndarray
Rounded array
luxpy.toolboxes.indvcmf.chop(arr, epsilon=1e-14)[source]

Chop values smaller than epsilon in absolute value to zero. Similar to Mathematica function.

Args:
arr:
float or ndarray
Array or number to be chopped.
epsilon:
float
Minimum number.
Returns:
chopped:
float or ndarray
Chopped numbers.
luxpy.toolboxes.indvcmf.sign_figs(x, n=0)[source]

Round x to n significant figures (not decimal points). This function is needed because the rounding specified in the CIE recommendation is different from the standard rounding scheme in python (which is following the IEEE recommendation). Uses my_round (above).

Args:
x:
int, float or ndarray
Number or array to be rounded.
Returns:
t:
float or ndarray
Rounded number or array.
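
The difference from Python's default IEEE "round half to even" behaviour is easy to see; the following is a plain NumPy re-implementation of the same "half away from zero" rule for illustration, not the toolbox's own my_round():

    import numpy as np

    def round_half_away_from_zero(x, n=0):
        """Round to n decimals, with halves rounded away from zero (CIE convention)."""
        x = np.asarray(x, dtype=float)
        return np.sign(x) * np.floor(np.abs(x) * 10**n + 0.5) / 10**n

    print(round(0.5), round(1.5), round(2.5))           # 0 2 2   (Python/IEEE: half to even)
    print(round_half_away_from_zero([0.5, 1.5, 2.5]))   # [1. 2. 3.]
    print(round_half_away_from_zero(-2.5))              # -3.0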

spdbuild/

py:
  • __init__.py

  • spdbuilder.py

  • spdbuilder2020.py

  • spdoptimizer2020.py

namespace:

luxpy.spdbuild

Module for building and optimizing SPDs

spdbuilder.py

Functions

gaussian_spd():

Generate Gaussian spectrum.

butterworth_spd():

Generate Butterworth based spectrum.

lorentzian2_spd():

Generate 2nd order Lorentzian based spectrum.

roundedtriangle_spd():

Generate a rounded triangle based spectrum.

mono_led_spd():

Generate a monochromatic LED spectrum based on a Gaussian or Butterworth profile, or according to Ohno (Opt. Eng. 2005).

spd_builder():

Build a spectrum based on Gaussians, monochromatic and/or phosphor LED spectra.

color3mixer():

Calculate the fluxes required to obtain a target chromaticity when (additively) mixing 3 light sources (a conceptual sketch follows at the end of this function list).

colormixer():

Calculate fluxes required to obtain a target chromaticity when (additively) mixing N light sources.

colormixer_pinv():

Additive color mixer of N primaries using the Moore-Penrose pseudo-inverse matrix.

get_w_summed_spd():

Calculate weighted sum of spds.

fitnessfcn():

Fitness function that calculates closeness of solution x to target values for specified objective functions.

spd_constructor_2():

Construct spd from spectral model parameters using pairs of intermediate sources.

spd_constructor_3():

Construct an spd from spectral model parameters using trios of intermediate sources.

spd_optimizer_2_3():

Optimizes the weights (fluxes) of a set of component spectra by combining pairs (2) or trios (3) of components into intermediate sources until only 3 remain. color3mixer() can then be called to calculate the fluxes required to obtain the target chromaticity, and the component fluxes are then back-calculated.

get_optim_pars_dict():

Setup dict with optimization parameters.

initialize_spd_model_pars():

Initialize spd_model_pars (for spd_constructor) based on type of component_data.

initialize_spd_optim_pars():

Initialize spd_optim_pars (x0, lb, ub for use with math.minimizebnd) based on type of component_data.

spd_optimizer():

Generate a spectrum with specified white point and optimized for certain objective functions from a set of component spectra or component spectrum model parameters.
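
Conceptually, the color3mixer()/colormixer() functions listed above solve a small linear system: the target tristimulus vector must equal a flux-weighted sum of the primaries' tristimulus vectors. A generic NumPy sketch of that idea (not luxpy's own API):

    import numpy as np

    def fluxes_for_target_xyz(xyz_primaries, xyz_target):
        """Solve XYZ_target = sum_i w_i * XYZ_i for the fluxes w of 3 primaries.

        xyz_primaries : (3, 3) array, one row of XYZ values per unit-flux primary
        xyz_target    : length-3 target tristimulus vector
        """
        M = np.asarray(xyz_primaries, dtype=float).T       # columns = primary XYZ vectors
        w = np.linalg.solve(M, np.asarray(xyz_target, dtype=float))
        return w   # negative fluxes mean the target lies outside the 3-primary gamut

    # hypothetical unit-flux XYZ values of three LED primaries:
    primaries = np.array([[40.0, 21.0,  2.0],    # 'red'
                          [35.0, 70.0, 11.0],    # 'green'
                          [18.0,  8.0, 95.0]])   # 'blue'
    print(fluxes_for_target_xyz(primaries, [95.0, 100.0, 108.0]))   # ≈ D65 white target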

Module for building and optimizing SPDs (2)

This module implements a class-based spectral optimizer. It differs from the spd_optimizer() function in spdbuilder.py in that it can use several different minimization algorithms, as well as a user-defined method. It is also written such that users can easily write their own primary constructor function. It supports the ‘3mixer’ algorithm (but not ‘2mixer’) and a ‘no-mixer’ algorithm (chromaticity as part of the list of objectives) for calculating the mixing contributions of the primaries.

Functions

gaussian_prim_constructor():

Constructs a Gaussian-based primary set.

_setup_wlr():

Initialize the wavelength range for use with PrimConstructor.

_extract_prim_optimization_parameters():

Extract the primary parameters from the optimization vector x and the pdefs dict for use with PrimConstructor.

_stack_wlr_spd():

Stack the wavelength range ‘on top’ of the spd values for use with PrimConstructor.

PrimConstructor:

class for primary (spectral) construction

Minimizer:

class for minimization of fitness of each of the objective functions

ObjFcns:

class to specify one or more objective functions for minimization

SpectralOptimizer:

class for spectral optimization (initialization and run)

spd_optimizer2():

Generate a spectrum with specified white point and optimized for certain objective functions from a set of component spectra or component spectrum model parameters (functional wrapper around SpectralOptimizer class).

Notes

  1. See the examples in the ‘__main__’ section of spdoptimizer2020.py for usage.

hypspcim/

py:
  • __init__.py

  • hyperspectral_img_simulator.py

namespace:

luxpy.hypspcim

Module for hyper spectral image simulation

_HYPSPCIM_PATH:

path to module

_HYPSPCIM_DEFAULT_IMAGE:

path + filename to default image

xyz_to_rfl():

Approximate the spectral reflectance of xyz values based on k-nearest-neighbour interpolation of samples from a standard reflectance set.

render_image():

Render image under specified light source spd.

luxpy.toolboxes.hypspcim.render_image(img=None, spd=None, rfl=None, out='img_hyp', refspd=None, D=None, cieobs='1931_2', cspace='xyz', cspace_tf={}, CSF=None, interp_type='nd', k_neighbours=4, show=True, verbosity=0, show_ref_img=True, stack_test_ref=12, write_to_file=None, csf_based_rgb_rounding=6)[source]

Render image under specified light source spd.

Args:
img:
None or str or ndarray with float (max = 1) rgb image.
None: load a default image.
spd:
ndarray, optional
Light source spectrum for rendering
If None: use CIE illuminant F4
rfl:
ndarray, optional
Reflectance set for color coordinate to rfl mapping.
out:
‘img_hyp’ or str, optional
(other option: ‘img_ren’: rendered image under :spd:)
refspd:
None, optional
Reference spectrum for color coordinate to rfl mapping.
None defaults to D65 (srgb has a D65 white point)
D:
None, optional
Degree of (von Kries) adaptation from spd to refspd.
cieobs:
_CIEOBS, optional
CMF set for calculation of xyz from spectral data.
cspace:
‘xyz’, optional
Color space for color coordinate to rfl mapping.
Tip: Use linear space (e.g. ‘xyz’, ‘Yuv’,…) for (interp_type == ‘nd’),
and perceptually uniform space (e.g. ‘ipt’) for (interp_type == ‘nearest’)
cspace_tf:
{}, optional
Dict with parameters for xyz_to_cspace and cspace_to_xyz transform.
CSF:
None, optional
RGB camera response functions.
If None: input :xyz: contains raw rgb values. Override :cspace:
argument and perform estimation directly in raw rgb space!!!
interp_type:
‘nd’, optional
Options:
- ‘nd’: perform n-dimensional linear interpolation using Delaunay triangulation.
- ‘nearest’: perform nearest neighbour interpolation.
k_neighbours:
4 or int, optional
Number of nearest neighbours for reflectance spectrum interpolation.
Neighbours are found using scipy.spatial.cKDTree
show:
True, optional
Show images.
verbosity:
0, optional
If > 0: make a plot of the color coordinates of original and rendered image pixels.
show_ref_img:
True, optional
True: shows rendered image under reference spd. False: shows
original image.
write_to_file:
None, optional
None: do nothing, else: write to filename(+path) in :write_to_file:
stack_test_ref:
12, optional
- 12: left (test), right (ref) format for show and imwrite
- 21: top (test), bottom (ref)
- 1: only show/write test
- 2: only show/write ref
- 0: show both, write test
csf_based_rgb_rounding:
_ROUNDING, optional
Int representing the number of decimals to round the RGB values (obtained from not-None CSF input) to before applying the search algorithm.
Smaller values increase the search speed, but could cause a fatal error that makes the Python kernel die. If this happens, increase the rounding value.
Returns:
returns:
img_hyp, img_ren,
ndarrays with float hyperspectral image and rendered images
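
A hedged usage sketch based on the documented signature; the illuminant lookup and the combined :out: string are assumptions:

    import luxpy as lx
    from luxpy.toolboxes import hypspcim

    spd = lx._CIE_ILLUMINANTS['A']   # assumed: render the default image under illuminant A

    # hyperspectral and rendered versions of the default image, shown side by side:
    img_hyp, img_ren = hypspcim.render_image(spd=spd, out='img_hyp,img_ren',
                                             show=True, stack_test_ref=12)
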
luxpy.toolboxes.hypspcim.xyz_to_rfl(xyz, CSF=None, rfl=None, out='rfl_est', refspd=None, D=None, cieobs='1931_2', cspace='xyz', cspace_tf={}, interp_type='nd', k_neighbours_nd=1, k_neighbours=4, verbosity=0, csf_based_rgb_rounding=6)[source]

Approximate spectral reflectance of xyz values based on nd-dimensional linear interpolation or k nearest neighbour interpolation of samples from a standard reflectance set.

Args:
xyz:
ndarray with xyz values of target points.
CSF:
None, optional
RGB camera response functions.
If None: input :xyz: contains raw rgb (float) values. Override :cspace:
argument and perform estimation directly in raw rgb space!!!
rfl:
ndarray, optional
Reflectance set for color coordinate to rfl mapping.
out:
‘rfl_est’ or str, optional
refspd:
None, optional
Reference spectrum for color coordinate to rfl mapping.
None defaults to D65.
cieobs:
_CIEOBS, optional
CMF set used for calculation of xyz from spectral data.
cspace:
‘xyz’, optional
Color space for color coordinate to rfl mapping.
Tip: Use linear space (e.g. ‘xyz’, ‘Yuv’,…) for (interp_type == ‘nd’),
and perceptually uniform space (e.g. ‘ipt’) for (interp_type == ‘nearest’)
cspace_tf:
{}, optional
Dict with parameters for xyz_to_cspace and cspace_to_xyz transform.
interp_type:
‘nd’, optional
Options:
- ‘nd’: perform n-dimensional linear interpolation using Delaunay triangulation.
- ‘nearest’: perform nearest neighbour interpolation.
k_neighbours:
4 or int, optional
Number of nearest neighbours for reflectance spectrum interpolation.
Neighbours are found using scipy.spatial.cKDTree
k_neighbours_nd:
1, optional
Number of nearest neighbours for reflectance spectrum interpolation when interp_type ‘nd’ fails.
If None: use the value set in :k_neighbours:
verbosity:
0, optional
If > 0: make a plot of the color coordinates of original and
rendered image pixels.
csf_based_rgb_rounding:
_ROUNDING, optional
Int representing the number of decimals to round the RGB values (obtained from not-None CSF input) to before applying the search algorithm.
Smaller values increase the search speed, but could cause a fatal error that makes the Python kernel die. If this happens, increase the rounding value.
Returns:
returns:
:rfl_est:
ndarrays with estimated reflectance spectra.
luxpy.toolboxes.hypspcim.get_superresolution_hsi(lrhsi, hrci, CSF, wl=[380, 780, 1], csf_based_rgb_rounding=6, interp_type='nd', k_neighbours=4, verbosity=0)[source]

Get a HighResolution HyperSpectral Image (super-resolution HSI) based on a LowResolution HSI and a HighResolution Color Image.

Args:
lrhsi:
ndarray with float (max = 1) LowResolution HSI [m,n,L].
hrci:
ndarray with float (max = 1) HighResolution Color Image [M,N,3].
CSF:
None, optional
ndarray with camera sensitivity functions
If None: use Nikon D700
wl:
[380,780,1], optional
Wavelength range and spacing or ndarray with wavelengths of HSI image.
interp_type:
‘nd’, optional
Options:
- ‘nd’: perform n-dimensional linear interpolation using Delaunay triangulation.
- ‘nearest’: perform nearest neighbour interpolation.
k_neighbours:
4 or int, optional
Number of nearest neighbours for reflectance spectrum interpolation.
Neighbours are found using scipy.spatial.cKDTree
verbosity:
0, optional
Verbosity level for sub-call to render_image().
If > 0: make a plot of the color coordinates of original and
rendered image pixels.
csf_based_rgb_rounding:
_ROUNDING, optional
Int representing the number of decimals to round the RGB values (obtained from not-None CSF input) to before applying the search algorithm.
Smaller values increase the search speed, but could cause a fatal error that makes the Python kernel die. If this happens, increase the rounding value.
Returns:
hrhsi:
ndarray with HighResolution HSI [M,N,L].
Procedure:
Call render_image(hrci, rfl = lrhsi_2, CSF = …) to estimate a hyperspectral image
from the high-resolution color image hrci with the reflectance spectra
in the low-resolution hyper-spectral image as database for the estimation.
Estimation is done in raw RGB space with the lrhsi converted using the
camera sensitivity functions in CSF.
luxpy.toolboxes.hypspcim.hsi_to_rgb(hsi, spd=None, cieobs='1931_2', srgb=False, linear_rgb=False, CSF=None, normalize_to_white=True, wl=[380, 780, 1])[source]

Convert HyperSpectral Image to rgb.

Args:
hsi:
ndarray with hyperspectral image [M,N,L]
spd:
None, optional
ndarray with illumination spectrum
cieobs:
_CIEOBS, optional
CMF set to convert spectral data to xyz tristimulus values.
srgb:
False, optional
If False: Use xyz_to_srgb(spd_to_xyz(…)) to convert to srgb values
If True: use camera sensitivity functions.
linear_rgb:
False, optional
If False: use gamma = 2.4 in xyz_to_srgb; if True: use gamma = 1 and set :use_linear_part: to False.
CSF:
None, optional
ndarray with camera sensitivity functions
If None: use Nikon D700
normalize_to_white:
True, optional
If True & CSF is not None: white-balance output rgb to a perfect white diffuser.
wl:
[380,780,1], optional
Wavelength range and spacing or ndarray with wavelengths of HSI image.
Returns:
rgb:
ndarray with rgb image [M,N,3]
luxpy.toolboxes.hypspcim.rfl_to_rgb(rfl, spd=None, CSF=None, wl=None, normalize_to_white=True)[source]

Convert spectral reflectance functions (illuminated by spd) to RGB values using the camera sensitivity functions.

Args:
rfl:
ndarray with spectral reflectance functions (1st row is wavelengths if wl is None).
spd:
None, optional
ndarray with illumination spectrum
CSF:
None, optional
ndarray with camera sensitivity functions
If None: use Nikon D700
normalize_to_white:
True, optional
If True: white-balance output rgb to a perfect white diffuser.
Returns:
rgb:
ndarray with rgb values for each of the spectral reflectance functions

dispcal/

py:
  • __init__.py

  • displaycalibration.py

namespace:

luxpy.dispcal

Module for display characterization

_PATH_DATA:

path to package data folder

_RGB:

set of RGB values that work quite well for display characterization

_XYZ:

example set of measured XYZ values corresponding to the RGB values in _RGB

find_index_in_rgb():

Find the index/indices of a specific r,g,b combination k in the ndarray rgb.

find_pure_rgb():

Find the indices of all pure r,g,b (single channel on) in the ndarray rgb.

correct_for_black:

Correct xyz for black level (flare)

TR_ggo(),TRi_ggo():

Forward (rgblin-to-xyz) and inverse (xyz-to-rgblin) GGO Tone Response models.

TR_gog(),TRi_gog():

Forward (rgblin-to-xyz) and inverse (xyz-to-rgblin) GOG Tone Response models.

TR_gogo(),TRi_gogo():

Forward (rgblin-to-xyz) and inverse (xyz-to-rgblin) GOGO Tone Response models.

TR_sigmoid(),TRi_sigmoid():

Forward (rgblin-to-xyz) and inverse (xyz-to-rgblin) SIGMOID Tone Response models.

estimate_tr():

Estimate Tone Response curves.

optimize_3x3_transfer_matrix():

Optimize the 3x3 rgb-to-xyz transfer matrix.

get_3x3_transfer_matrix_from_max_rgb():

Get the rgb-to-xyz transfer matrix from the maximum R,G,B single channel outputs

calibrate():

Calculate TR parameters/lut and conversion matrices

calibration_performance():

Check calibration performance (cfr. individual and average color differences for each stimulus).

rgb_to_xyz():

Convert input rgb to xyz

xyz_to_rgb():

Convert input xyz to rgb

DisplayCalibration():

Calculate TR parameters/lut and conversion matrices and store in object.

generate_training_data():

Generate RGB training pairs by creating a cube of RGB values.

generate_test_data():

Generate XYZ test values by creating a cube of CIELAB L*a*b* values, then converting these to XYZ values.

plot_rgb_xyz_lab_of_set():

Make 3d-plots of the RGB, XYZ and L*a*b* cubes of the data in rgb_xyz_lab.

split_ramps_from_cube():

Split a cube data set in pure RGB (ramps) and non-pure (remainder of cube).

is_random_sampling_of_pure_rgbs():

Return boolean indicating if the RGB cube axes (=single channel ramps) are sampled (different increment) independently from the remainder of the cube.

ramp_data_to_cube_data():

Create a RGB and XYZ cube from the single channel ramps in the training data.

GGO_GOG_GOGO_PLI:

Class for characterization models that combine a 3x3 transfer matrix with a GGO, GOG, GOGO, SIGMOID, PLI or 1-D LUT Tone Response curve.
- Tone Response curve models:
  * GGO: gain-gamma-offset model: y = gain*x**gamma + offset
  * GOG: gain-offset-gamma model: y = (gain*x + offset)**gamma
  * GOGO: gain-offset-gamma-offset model: y = (gain*x + offset)**gamma + offset
  * SIGMOID: sigmoid (S-shaped) model: y = offset + gain*[1 / (1 + q*exp(-a/gamma*(x - m)))]**gamma
  * PLI: piecewise linear interpolation
  * LUT: 1-D look-up table for the TR
- RGB-to-XYZ / XYZ-to-RGB transfer matrices:
  * M fixed: derived from the tristimulus values of maximum single-channel output
  * M optimized: obtained by minimizing the RMSE between measured and predicted XYZ values
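
As an illustration of the forward structure shared by these models, here is a generic sketch of the GGO tone response followed by the 3x3 matrix, not the toolbox's own TR_ggo() / rgb_to_xyz() implementations:

    import numpy as np

    def ggo_forward(rgb, gain, gamma, offset, M, nbit=8):
        """Digital RGB -> XYZ via a gain-gamma-offset tone response and a 3x3 matrix.

        rgb   : (N, 3) digital values in [0, 2**nbit - 1]
        gain, gamma, offset : scalar or per-channel GGO parameters
        M     : (3, 3) linear-rgb -> XYZ transfer matrix
        """
        x = np.asarray(rgb, dtype=float) / (2**nbit - 1)   # normalized digital values
        rgb_lin = gain * x**gamma + offset                 # y = gain*x**gamma + offset
        return rgb_lin @ np.asarray(M).T                   # XYZ = M @ rgb_lin, row-wise

    # hypothetical matrix (scaled sRGB-like primaries) and parameters:
    M = np.array([[41.24, 35.76, 18.05],
                  [21.26, 71.52,  7.22],
                  [ 1.93, 11.92, 95.05]])
    print(ggo_forward([[255, 255, 255], [128, 128, 128]],
                      gain=1.0, gamma=2.2, offset=0.0, M=M))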

MLPR:

Class for Multi-Layer Perceptron Regressor based model.

POR:

Class for POlynomial Regression based model.

LUTNNLI:

Class for LUT-Nearest-Neighbour-distance-weighted-Linear-Interpolation based models.

LUTQHLI:

Class for LUT-QHull-Linear-Interpolation based models (cfr. scipy.interpolate.LinearNDInterpolator)

luxpy.toolboxes.dispcal._parse_rgbxyz_input(rgb, xyz=None, sep=',', header=None)[source]

Parse the rgb and xyz inputs

luxpy.toolboxes.dispcal.find_index_in_rgb(rgb, k=[255, 255, 255], as_bool=False)[source]

Find the index/indices of a specific r,g,b combination k in the ndarray rgb. (return a boolean array indicating the positions if as_bool == True)

luxpy.toolboxes.dispcal._plot_target_vs_predicted_lab(labtarget, labpredicted, cspace='lab', verbosity=1)[source]

Make a plot of target vs predicted color coordinates

luxpy.toolboxes.dispcal._plot_DEs_vs_digital_values(DEslab, DEsl, DEsab, rgbcal, avg=<function <lambda>>, nbit=8, verbosity=1)[source]

Make a plot of the lab, l and ab color differences for the different calibration stimulus types.

luxpy.toolboxes.dispcal.calibrate(rgbcal, xyzcal, black_correct=True, tr_L_type='lms', tr_type='lut', tr_par_lower_bounds=(0, -0.1, 0, -0.1), cieobs='1931_2', nbit=8, cspace='lab', avg=<function <lambda>>, tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, verbosity=1, sep=', ', header=None, optimize_M=True)[source]

Calculate TR parameters/lut and conversion matrices.

Args:
rgbcal:
ndarray [Nx3] or string with filename of RGB values
rgbcal must contain at least the following types of settings:
- pure R,G,B: e.g. for pure R: (R != 0) & (G==0) & (B == 0)
- white(s): R = G = B = 2**nbit-1
- gray(s): R = G = B
- black(s): R = G = B = 0
- binary colors: cyan (G = B, R = 0), yellow (G = R, B = 0), magenta (R = B, G = 0)
xyzcal:
ndarray [Nx3] or string with filename of measured XYZ values for
the RGB settings in rgbcal.
black_correct:
True, optional
If True: correct xyz for black -> xyz - xyz_black
tr_L_type:
‘lms’, optional
Type of response to use in the derivation of the Tone-Response curves.
options:
- ‘lms’: use cone fundamental responses: L vs R, M vs G and S vs B
(reduces noise and generally leads to more accurate characterization)
- ‘Y’: use the luminance signal: Y vs R, Y vs G, Y vs B
tr_type:
‘lut’, optional
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
tr_par_lower_bounds:
(0,-0.1,0,-0.1), optional
Lower bounds used when optimizing the parameters of the GGO, GOG, GOGO tone
response functions. Try a different set if the fit fails.
Tip for GOG & GOGO: try changing -0.1 to 0 (0 is not the default,
because in most cases this leads to a less good fit)
cieobs:
‘1931_2’, optional
CIE CMF set used to determine the XYZ tristimulus values
(needed when tr_L_type == ‘lms’: determines the conversion matrix to
convert xyz to lms values)
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
cspace:
color space or chromaticity diagram to calculate color differences in
when optimizing the xyz_to_rgb and rgb_to_xyz conversion matrices.
avg:
lambda x: ((x**2).mean()**0.5), optional
Function used to average the color differences of the individual RGB settings
in the optimization of the xyz_to_rgb and rgb_to_xyz conversion matrices.
tr_ensure_increasing_lut_at_low_rgb:
0.2 or float (max = 1.0) or None, optional
Ensure an increasing lut by setting all values below the RGB with the maximum
zero-crossing of np.diff(lut) and RGB/RGB.max() values of :tr_ensure_increasing_lut_at_low_rgb:
(a value of 0.2 is a good rule of thumb)
Non-strictly increasing lut values can be caused at low RGB values due
to noise and low measurement signal.
If None: don’t force lut, but keep as is.
tr_force_increasing_lut_at_high_rgb:
True, optional
If True: ensure the tone response curves in the lut are monotonically increasing
by finding the first 1.0 value and setting all values after that also to 1.0.
tr_rms_break_threshold:
0.01, optional
Threshold for breaking a loop that tries different bounds
for the gain in the TR optimization for the GGO, GOG, GOGO models.
(for some input the curve_fit fails, but succeeds on using different bounds)
tr_smooth_window_factor:
None, optional
Determines window size for smoothing of data using scipy’s savgol_filter prior to determining the TR curves.
window_size = x.shape[0]//tr_smooth_window_factor
If None: don’t apply any smoothing
verbosity:
1, optional
> 0: print and plot optimization results
sep:
‘,’, optional
separator in files with rgbcal and xyzcal data
header:
None, optional
header specifier for files with rgbcal and xyzcal data
(see pandas.read_csv)
optimize_M:
True, optional
If True: optimize transfer matrix M
Else: use column matrix of tristimulus values of R,G,B channels at max.
Returns:
M:
linear rgb to xyz conversion matrix
N:
xyz to linear rgb conversion matrix
tr:
Tone Response function parameters or lut or piecewise linear interpolation functions (forward and backward)
xyz_black:
ndarray with XYZ tristimulus values of black
xyz_white:
ndarray with XYZ tristimulus values of white
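
A minimal, hypothetical usage sketch of calibrate() (the csv file names are placeholders; [Nx3] ndarrays can be passed directly instead):

    import numpy as np
    from luxpy.toolboxes import dispcal as dc

    # rgbcal: calibration RGB settings; xyzcal: the corresponding measured XYZ values.
    M, N, tr, xyz_black, xyz_white = dc.calibrate('rgbcal.csv', 'xyzcal.csv',
                                                  tr_type='lut', cspace='lab',
                                                  nbit=8, verbosity=0)
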
luxpy.toolboxes.dispcal.calibration_performance(rgb, xyztarget, M, N, tr, xyz_black, xyz_white, tr_type='lut', cspace='lab', avg=<function <lambda>>, rgb_is_xyz=False, is_verification_data=False, nbit=8, verbosity=1, sep=', ', header=None)[source]

Check calibration performance. Calculate DE for each stimulus.

Args:
rgb:
ndarray [Nx3] or string with filename of RGB values
(or xyz values if argument rgb_is_xyz == True!)
xyztarget:
ndarray [Nx3] or string with filename of target XYZ values corresponding
to the RGB settings (or the measured XYZ values, if argument rgb_is_xyz == True).
M:
linear rgb to xyz conversion matrix
N:
xyz to linear rgb conversion matrix
tr:
Tone Response function represented by GGO, GOG, GOGO, LUT or PLI (piecewise linear function) models
xyz_black:
ndarray with XYZ tristimulus values of black
xyz_white:
ndarray with XYZ tristimulus values of white
tr_type:
‘lut’, optional
Type of Tone Response in tr input argument
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
cspace:
color space or chromaticity diagram to calculate color differences in.
avg:
lambda x: ((x**2).mean()**0.5), optional
Function used to average the color differences of the individual RGB settings
in the optimization of the xyz_to_rgb and rgb_to_xyz conversion matrices.
rgb_is_xyz:
False, optional
If True: the data in argument rgb are actually measured XYZ tristimulus values
and are directly compared to the target xyz.
is_verification_data:
False, optional
If False: the data is assumed to be corresponding to RGB value settings used
in the calibration (i.e. containing whites, blacks, grays, pure and binary mixtures)
If True: no assumptions on content of rgb, so use this settings when
checking the performance for a set of measured and target xyz data
different than the ones used in the actual calibration measurements.
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
verbosity:
1, optional
> 0: print and plot optimization results
sep:
‘,’, optional
separator in files with rgbcal and xyzcal data
header:
None, optional
header specifier for files with rgbcal and xyzcal data
(see pandas.read_csv)
Returns:
M:
linear rgb to xyz conversion matrix
N:
xyz to linear rgb conversion matrix
tr:
Tone Response function parameters or lut or piecewise linear interpolation functions (forward and backward)
xyz_black:
ndarray with XYZ tristimulus values of black
xyz_white:
ndarray with XYZ tristimulus values of white
luxpy.toolboxes.dispcal.rgb_to_xyz(rgb, M, tr, xyz_black, tr_type='lut', nbit=8)[source]

Convert input rgb to xyz.

Args:
rgb:
ndarray [Nx3] with RGB values
M:
linear rgb to xyz conversion matrix
tr:
Tone Response function represented by GGO, GOG, GOGO, LUT or PLI (piecewise linear function) models
xyz_black:
ndarray with XYZ tristimulus values of black
tr_type:
‘lut’, optional
Type of Tone Response in tr input argument
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
Returns:
xyz:
ndarray [Nx3] of XYZ tristimulus values
luxpy.toolboxes.dispcal.xyz_to_rgb(xyz, N, tr, xyz_black, tr_type='lut', nbit=8)[source]

Convert xyz to input rgb.

Args:
xyz:
ndarray [Nx3] with XYZ tristimulus values
N:
xyz to linear rgb conversion matrix
tr:
Tone Response function represented by GGO, GOG, GOGO, LUT or PLI (piecewise linear function) models
xyz_black:
ndarray with XYZ tristimulus values of black
tr_type:
‘lut’, optional
Type of Tone Response in tr input argument
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
Returns:
rgb:
ndarray [Nx3] of display RGB values
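
A short, hypothetical round-trip sketch using the M, N, tr and xyz_black obtained from a previous calibrate() call:

    import numpy as np
    from luxpy.toolboxes import dispcal as dc

    rgb = np.array([[128, 128, 128], [255, 0, 0]])        # 8-bit display RGB settings
    xyz = dc.rgb_to_xyz(rgb, M, tr, xyz_black, tr_type='lut', nbit=8)
    rgb_back = dc.xyz_to_rgb(xyz, N, tr, xyz_black, tr_type='lut', nbit=8)  # ~ rgb (within model error)
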
class luxpy.toolboxes.dispcal.DisplayCalibration(rgbcal, xyzcal=None, tr_L_type='lms', cieobs='1931_2', tr_type='lut', nbit=8, cspace='lab', avg=<function DisplayCalibration.<lambda>>, tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, verbosity=1, sep=', ', header=None, optimize_M=True)[source]

Class for display_calibration.

Args:
rgbcal:
ndarray [Nx3] or string with filename of RGB values
rgbcal must contain at least the following types of settings:
- pure R,G,B: e.g. for pure R: (R != 0) & (G==0) & (B == 0)
- white(s): R = G = B = 2**nbit-1
- gray(s): R = G = B
- black(s): R = G = B = 0
- binary colors: cyan (G = B, R = 0), yellow (G = R, B = 0), magenta (R = B, G = 0)
xyzcal:
None, optional
ndarray [Nx3] or string with filename of measured XYZ values for
the RGB settings in rgbcal.
if None: rgbcal is [Nx6] ndarray containing rgb (columns 0-2) and xyz data (columns 3-5)
tr_L_type:
‘lms’, optional
Type of response to use in the derivation of the Tone-Response curves.
options:
- ‘lms’: use cone fundamental responses: L vs R, M vs G and S vs B
(reduces noise and generally leads to more accurate characterization)
- ‘Y’: use the luminance signal: Y vs R, Y vs G, Y vs B
tr_type:
‘lut’, optional
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
cieobs:
‘1931_2’, optional
CIE CMF set used to determine the XYZ tristimulus values
(needed when tr_L_type == ‘lms’: determines the conversion matrix to
convert xyz to lms values)
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
cspace:
color space or chromaticity diagram to calculate color differences in
when optimizing the xyz_to_rgb and rgb_to_xyz conversion matrices.
avg:
lambda x: ((x**2).mean()**0.5), optional
Function used to average the color differences of the individual RGB settings
in the optimization of the xyz_to_rgb and rgb_to_xyz conversion matrices.
tr_ensure_increasing_lut_at_low_rgb:
0.2 or float (max = 1.0) or None, optional
Ensure an increasing lut by setting all values below the RGB with the maximum
zero-crossing of np.diff(lut) and RGB/RGB.max() values of :tr_ensure_increasing_lut_at_low_rgb:
(a value of 0.2 is a good rule of thumb)
Non-strictly increasing lut values can be caused at low RGB values due
to noise and low measurement signal.
If None: don’t force lut, but keep as is.
tr_force_increasing_lut_at_high_rgb:
True, optional
If True: ensure the tone response curves in the lut are monotonically increasing
by finding the first 1.0 value and setting all values after that also to 1.0.
tr_rms_break_threshold:
0.01, optional
Threshold for breaking a loop that tries different bounds
for the gain in the TR optimization for the GGO, GOG, GOGO models.
(for some input the curve_fit fails, but succeeds on using different bounds)
tr_smooth_window_factor:
None, optional
Determines window size for smoothing of data using scipy’s savgol_filter prior to determining the TR curves.
window_size = x.shape[0]//tr_smooth_window_factor
If None: don’t apply any smoothing
verbosity:
1, optional
> 0: print and plot optimization results
sep:
‘,’, optional
separator in files with rgbcal and xyzcal data
header:
None, optional
header specifier for files with rgbcal and xyzcal data
(see pandas.read_csv)
optimize_M:
True, optional
If True: optimize transfer matrix M
Else: use column matrix of tristimulus values of R,G,B channels at max.
Return:
calobject:
attributes are:
- M: linear rgb to xyz conversion matrix
- N: xyz to linear rgb conversion matrix
- TR: Tone Response function parameters for GGO, GOG, GOGO models or lut or piecewise linear interpolation functions (forward and backward)
- xyz_black: ndarray with XYZ tristimulus values of black
- xyz_white: ndarray with XYZ tristimulus values of white
as well as:
- rgbcal, xyzcal, cieobs, avg, tr_type, nbit, cspace, verbosity
- performance: dictionary with various color differences set to np.nan
- (run calobject.check_performance() to fill it with actual values)
check_performance(rgb=None, xyz=None, verbosity=None, sep=',', header=None, rgb_is_xyz=False, is_verification_data=True)[source]

Check calibration performance (if rgbcal is None: use calibration data).

Args:
rgb:
None, optional
ndarray [Nx3] or string with filename of RGB values
(or xyz values if argument rgb_is_xyz == True!)
If None: use self.rgbcal
xyz:
None, optional
ndarray [Nx3] or string with filename of target XYZ values corresponding
to the RGB settings (or the measured XYZ values, if argument rgb_is_xyz == True).
If None: use self.xyzcal
verbosity:
None, optional
if None: use self.verbosity
if > 0: print and plot optimization results
sep:
‘,’, optional
separator in files with rgb and xyz data
header:
None, optional
header specifier for files with rgb and xyz data
(see pandas.read_csv)
rgb_is_xyz:
False, optional
If True: the data in argument rgb are actually measured XYZ tristimulus values
and are directly compared to the target xyz.
is_verification_data:
True, optional
If False: the data is assumed to be corresponding to RGB value settings used
in the calibration (i.e. containing whites, blacks, grays, pure and binary mixtures)
Performance results are stored in self.performance.
If True: no assumptions on content of rgb, so use this settings when
checking the performance for a set of measured and target xyz data
different than the ones used in the actual calibration measurements.
Return:
performance:
dictionary with various color differences.
to_xyz(rgb)[source]

Convert display rgb to xyz.

to_rgb(xyz)[source]

Convert xyz to display rgb.
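
A minimal, hypothetical sketch of the class-based workflow (the Nx6 calibration file name is a placeholder; with xyzcal = None it should contain rgb in columns 0-2 and xyz in columns 3-5):

    import numpy as np
    from luxpy.toolboxes import dispcal as dc

    cal = dc.DisplayCalibration('rgbxyzcal.csv', xyzcal=None,
                                tr_type='lut', cspace='lab', nbit=8, verbosity=0)
    xyz = cal.to_xyz(np.array([[200, 150, 100]]))   # display rgb -> xyz
    rgb = cal.to_rgb(xyz)                           # xyz -> display rgb
    performance = cal.check_performance()           # dict with color differences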

luxpy.toolboxes.dispcal.TR_ggo(x, *p)[source]

Forward GGO tone response model (x = rgb; p = [gain,offset,gamma]).

Notes:
  1. GGO model: y = gain*x**gamma + offset

luxpy.toolboxes.dispcal.TRi_ggo(x, *p)[source]

Inverse GGO tone response model (x = xyz; p = [gain,offset,gamma]).

Notes:
  1. GGO model: y = gain*x**gamma + offset

luxpy.toolboxes.dispcal.TR_gog(x, *p)[source]

Forward GOG tone response model (x = rgb; p = [gain,offset,gamma]).

Notes:
  1. GOG model: y = (gain*x + offset)**gamma

luxpy.toolboxes.dispcal.TRi_gog(x, *p)[source]

Inverse GOG tone response model (x = xyz; p = [gain,offset,gamma]).

Notes:
  1. GOG model: y = (gain*x + offset)**gamma

luxpy.toolboxes.dispcal.TR_gogo(x, *p)[source]

Forward GOGO tone response model (x = rgb; p = [gain,offset,gamma,offset2]).

Notes:
  1. GOGO model: y = (gain*x + offset)**gamma + offset2

luxpy.toolboxes.dispcal.TRi_gogo(x, *p)[source]

Inverse GOGO tone response model (x = xyz; p = [gain,offset,gamma,offset2]).

Notes:
  1. GOGO model: y = (gain*x + offset)**gamma + offset2

luxpy.toolboxes.dispcal.TR_sigmoid(x, *p)[source]

Forward SIGMOID tone response model (x = rgb; p = [gain, offset, gamma, m, a, q]).

Notes:
  1. SIGMOID model: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)

luxpy.toolboxes.dispcal.TRi_sigmoid(x, *p)[source]

Inverse SIGMOID tone response model (x = xyz; p = [gain, offset, gamma, m, a, q]).

Notes:
  1. SIGMOID model: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
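
As a standalone numeric illustration of the tone-response math (pure numpy, not calling the luxpy TR functions; parameter values are illustrative), the GGO model and its analytic inverse are:

    import numpy as np

    gain, offset, gamma = 0.95, 0.02, 2.2
    x = np.linspace(0, 1, 5)                        # normalized single-channel rgb
    y = gain * x**gamma + offset                    # forward GGO: y = gain*x**gamma + offset
    x_back = ((y - offset) / gain)**(1 / gamma)     # analytic inverse of the GGO model
    assert np.allclose(x, x_back)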

luxpy.toolboxes.dispcal.correct_for_black(xyz, rgb, xyz_black=None)[source]

Correct xyz for black level (flare)

luxpy.toolboxes.dispcal._rgb_linearizer(rgb, tr, tr_type='lut', nbit=8)[source]

Linearize rgb using tr tone response function represented by a GGO, GOG, GOGO, LUT or PLI (cfr. piecewise linear interpolator) model

luxpy.toolboxes.dispcal._rgb_delinearizer(rgblin, tr, tr_type='lut', nbit=8)[source]

De-linearize linear rgblin using tr tone response function represented by GGO, GOG, GOGO, LUT or PLI (cfr. piecewise linear interpolator) model

luxpy.toolboxes.dispcal.estimate_tr(rgb, xyz, black_correct=True, xyz_black=None, tr_L_type='lms', tr_type='lut', tr_par_lower_bounds=(0, -0.1, 0, -0.1), cieobs='1931_2', nbit=8, tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, verbosity=1, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None)[source]

Estimate tone response functions.

Args:
rgb:
ndarray [Nx3] of RGB values
rgb must contain at least the following types of settings:
- pure R,G,B: e.g. for pure R: (R != 0) & (G==0) & (B == 0)
- white(s): R = G = B = 2**nbit-1
- black(s): R = G = B = 0
xyz:
ndarray [Nx3] of measured XYZ values for the RGB settings in rgb.
black_correct:
True, optional
If True: correct xyz for black -> xyz - xyz_black
xyz_black:
None or ndarray, optional
If None: determine xyz_black from input data (must contain rgb = [0,0,0]!)
tr_L_type:
‘lms’, optional
Type of response to use in the derivation of the Tone-Response curves.
options:
- ‘lms’: use cone fundamental responses: L vs R, M vs G and S vs B
(reduces noise and generally leads to more accurate characterization)
- ‘Y’: use the luminance signal: Y vs R, Y vs G, Y vs B
tr_type:
‘lut’, optional
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
tr_par_lower_bounds:
(0,-0.1,0,-0.1), optional
Lower bounds used when optimizing the parameters of the GGO, GOG, GOGO tone
response functions. Try a different set if the fit fails.
Tip for GOG & GOGO: try changing -0.1 to 0 (0 is not the default,
because in most cases this leads to a less good fit)
cieobs:
‘1931_2’, optional
CIE CMF set used to determine the XYZ tristimulus values
(needed when tr_L_type == ‘lms’: determines the conversion matrix to
convert xyz to lms values)
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
tr_ensure_increasing_lut_at_low_rgb:
0.2 or float (max = 1.0) or None, optional
Ensure an increasing lut by setting all values below the RGB with the maximum
zero-crossing of np.diff(lut) and RGB/RGB.max() values of :tr_ensure_increasing_lut_at_low_rgb:
(a value of 0.2 is a good rule of thumb)
Non-strictly increasing lut values can be caused at low RGB values due
to noise and low measurement signal.
If None: don’t force lut, but keep as is.
tr_force_increasing_lut_at_high_rgb:
True, optional
If True: ensure the tone response curves in the lut are monotonically increasing
by finding the first 1.0 value and setting all values after that also to 1.0.
verbosity:
1, optional
> 0: print and plot optimization results
tr_rms_break_threshold:
0.01, optional
Threshold for breaking a loop that tries different bounds
for the gain in the TR optimization for the GGO, GOG, GOGO models.
(for some input the curve_fit fails, but succeeds on using different bounds)
tr_smooth_window_factor:
None, optional
Determines window size for smoothing of data using scipy’s savgol_filter prior to determining the TR curves.
window_size = x.shape[0]//tr_smooth_window_factor
If None: don’t apply any smoothing
Returns:
tr:
Tone Response function parameters or lut or piecewise linear interpolation functions (forward and backward)
xyz_black:
ndarray with XYZ tristimulus values of black
p_pure:
ndarray with positions in xyz and rgb that contain data corresponding to the black level (rgb = [0,0,0]).
luxpy.toolboxes.dispcal.optimize_3x3_transfer_matrix(xyz, rgb, black_correct=True, xyz_black=None, rgblin=None, nbit=8, cspace='lab', avg=<function <lambda>>, tr=None, tr_type=None, verbosity=0)[source]

Optimize the 3x3 rgb-to-xyz transfer matrix

Args:
xyz:
ndarray with measured XYZ tristimulus values (not corrected for the black-level)
rgb:
device RGB values.
black_correct:
True, optional
If True: correct xyz for black -> xyz - xyz_black
xyz_black:
None or ndarray, optional
If None: determine xyz_black from input data (must contain rgb = [0,0,0]!)
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
cspace:
color space or chromaticity diagram to calculate color differences in
when optimizing the xyz_to_rgb and rgb_to_xyz conversion matrices.
avg:
lambda x: ((x**2).mean()**0.5), optional
Function used to average the color differences of the individual RGB settings
in the optimization of the xyz_to_rgb and rgb_to_xyz conversion matrices.
tr:
None, optional
Tone Response function parameters or lut or piecewise linear interpolation functions (forward and backward)
If None -> :rgblin: must be provided !
tr_type:
‘lut’, optional
options:
- ‘lut’: Derive/specify Tone-Response as a look-up-table
- ‘ggo’: Derive/specify Tone-Response as a gain-gamma-offset function: y = gain*x**gamma + offset
- ‘gog’: Derive/specify Tone-Response as a gain-offset-gamma function: y = (gain*x + offset)**gamma
- ‘gogo’: Derive/specify Tone-Response as a gain-offset-gamma-offset function: y = (gain*x + offset)**gamma + offset
- ‘sigmoid’: Derive/specify Tone-Response as a sigmoid function: y = offset + gain * [1 / (1 + q*exp(-(a/gamma)*(x - m)))]**(gamma)
- ‘pli’: Derive/specify Tone-Response as a piecewise linear interpolation function
verbosity:
0, optional
> 0: print and plot optimization results
Returns:
M:
linear rgb-to-xyz conversion matrix
luxpy.toolboxes.dispcal.get_3x3_transfer_matrix_from_max_rgb(xyz, rgb, black_correct=True, xyz_black=None)[source]

Get the rgb-to-xyz transfer matrix from the maximum R,G,B single channel outputs

Args:
xyz:
ndarray with measured XYZ tristimulus values (not corrected for the black-level)
rgb:
device RGB values.
black_correct:
True, optional
If True: correct xyz for black -> xyz - xyz_black
xyz_black:
None or ndarray, optional
If None: determine xyz_black from input data (must contain rgb = [0,0,0]!)
Returns:
M:
linear rgb-to-xyz conversion matrix
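
The "column matrix of tristimulus values at maximum single channel output" idea can be illustrated with a small, standalone numpy sketch (hypothetical, already black-corrected XYZ values; the actual function also handles the black correction internally):

    import numpy as np

    xyz_r_max = np.array([45.0, 23.0,  2.0])    # hypothetical black-corrected XYZ of R at max
    xyz_g_max = np.array([35.0, 70.0, 11.0])    # ... of G at max
    xyz_b_max = np.array([18.0,  8.0, 95.0])    # ... of B at max
    M = np.vstack((xyz_r_max, xyz_g_max, xyz_b_max)).T   # channel tristimulus values as columns
    N = np.linalg.inv(M)                                 # xyz -> linear rgb

    rgb_lin = np.array([0.5, 0.5, 0.5])                  # linearized (tone-response corrected) rgb
    xyz = M @ rgb_lin                                    # additive mixing: xyz = M . rgb_lin
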
luxpy.toolboxes.dispcal.generate_training_data(inc=[10], inc_offset=0, nbit=8, seed=0, randomize_order=True, verbosity=0, fig=None)[source]

Generate RGB training pairs by creating a cube of RGB values.

Args:
inc:
[10], optional
Increment along each channel (=R,G,B) axes in the RGB cube.
If inc is a list with 2 different values the RGB cube axes
are sampled independently from the remainder of the cube.
–> inc = [inc_remainder, inc_axes]
inc_offset:
0, optional
The offset along each channel axes from which to start incrementing.
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
include_max:
True, optional
If True: ensure all combinations of max value (e.g. 255 for nbit = 8) are included in RGB cube.
include_min:
True, optional
If True: ensure all combinations of min value 0 are included in RGB cube.
seed:
0, optional
Seed for setting the state of numpy’s random number generator.
randomize_order:
True, optional
Randomize the order of the (xyz,rgb) pairs before output.
verbosity:
0, optional
Level of output.
Returns:
rgb:
ndarray with RGB values.
luxpy.toolboxes.dispcal.generate_test_data(dlab=[10, 10, 10], nbit=8, seed=0, xyzw=None, cieobs='1931_2', xyzrgb_hull=None, randomize_order=True, verbosity=0, fig=None)[source]

Generate XYZ test values by creating a cube of CIELAB L*a*b* values, then converting these to XYZ values.

Args:
dlab:
[10,10,10], optional
Increment along each CIELAB (=L*,a*,b*) axes in the Lab cube.
nbit:
8, optional
RGB values in nbit format (e.g. 8, 16, …)
seed:
0, optional
Seed for setting the state of numpy’s random number generator.
xyzw:
None, optional
White point xyz to convert from lab to xyz
If None: use the white in xyzrgb_hull. If this is also None: use _CIE_D65 white.
cieobs:
_CIEOBS, optional
CIE standard observer used to convert _CIE_D65 to XYZ when xyzw
needs to be determined from the illuminant spectrum.
xyzrgb_hull:
None, optional
ndarray with (XYZ,RGB) pairs from which the hull (= display gamut) can be determined.
If None: test XYZ might fall outside of display gamut !
randomize_order:
True, optional
Randomize the order of the test xyz before output.
verbosity:
0, optional
Level of output.
Returns:
xyz:
ndarray with XYZ values.
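
A hypothetical sketch of how training RGB settings and test XYZ targets might be generated (parameter values are illustrative):

    from luxpy.toolboxes import dispcal as dc

    # denser sampling of the single-channel ramps than of the cube remainder:
    rgb_train = dc.generate_training_data(inc=[32, 16], nbit=8, seed=0)

    # Lab-cube based XYZ test targets (without a gamut hull they may fall outside the display gamut):
    xyz_test = dc.generate_test_data(dlab=[10, 10, 10], nbit=8, seed=0)
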
luxpy.toolboxes.dispcal.split_ramps_from_cube(rgb, xyz=None, rgb_only=False)[source]

Split a cube data set in pure RGB (ramps) and non-pure (remainder of cube).

luxpy.toolboxes.dispcal.is_random_sampling_of_pure_rgbs(inc)[source]

Return boolean indicating if the RGB cube axes (=single channel ramps) are sampled (different increment) independently from the remainder of the cube.

Note:
  1. Independent sampling is indicated when :inc: is a list with 2 different values.

luxpy.toolboxes.dispcal.plot_rgb_xyz_lab_of_set(rgb_xyz_lab, subscript='', data_contains=['rgb', 'xyz', 'lab'], nrows=1, row=1, fig=None, axs=None, figsize=(14, 7), marker='.')[source]

Make 3d-plots of the RGB, XYZ and L*a*b* cubes of the data in rgb_xyz_lab.

Args:
rgb_xyz_lab:
ndarray with RGB, XYZ, Lab data.
subscript:
‘’, optional
subscript to add to the axis labels.
data_contains:
[‘rgb’,’xyz’,’lab’], optional
specifies what is in rgb_xyz_lab
nrows:
1, optional
Number of rows in (nx3) figure.
row:
1, optional
Current row number to plot to (when using the function to plot nx3 figures)
fig:
None, optional
Figure handle.
If None: generate new figure.
axs:
None, optional
Axes handles: (3,) or None
If None: add new axes for each of the RGB, XYZ, Lab subplots.
figsize:
(14,7), optional
Figure size.
marker:
‘.’, optional
Marker symbol used for plotting.
Return:
fig, axs:
Handles to the figure and the three axes in that figure.
luxpy.toolboxes.dispcal.ramp_data_to_cube_data(training_data, black_correct=True, nbit=8)[source]

Create a RGB and XYZ cube from the single channel ramps in the training data.

Args:
training_data:
tuple (xyz_train, rgb_train) of ndarrays
black_correct:
True, optional
If True: apply black correction before creating the cubes
If False: the black level will be added 3 times (as the XYZ of the R, G, B channels are summed)
class luxpy.toolboxes.dispcal.GGO_GOG_GOGO_PLI(training_data=None, single_channel_ramp_only_data=False, cspace='lab', nbit=8, xyzw=None, xyzb=None, black_correct=True, tr=None, tr_type=None, tr_L_type='Y', tr_par_lower_bounds=(0, -0.1, 0, -0.1), M=None, optimize_M=True, N=None, cieobs='1931_2', avg=<function GGO_GOG_GOGO_PLI.<lambda>>, tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None)[source]
train(training_data=None, single_channel_ramp_only_data=None, EPS=1e-300)[source]
to_rgb(xyz)[source]
to_xyz(rgb)[source]
class luxpy.toolboxes.dispcal.MLPR(training_data=None, single_channel_ramp_only_data=False, cspace='lab', nbit=8, xyzw=None, xyzb=None, black_correct=False, linearize_rgb=False, tr_par_lower_bounds=(0, -0.1, 0, -0.1), tr_L_type='Y', tr_type='pli', cieobs='1931_2', tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, mode=['bw'], use_StandardScaler=True, hidden_layer_sizes=(500,), activation='relu', max_iter=100000, tol=0.0001, learning_rate='adaptive', **kwargs)[source]
class luxpy.toolboxes.dispcal.POR(training_data=None, single_channel_ramp_only_data=False, cspace='lab', nbit=8, xyzw=None, xyzb=None, black_correct=True, linearize_rgb=True, tr_par_lower_bounds=(0, -0.1, 0, -0.1), tr_L_type='Y', tr_type='pli', cieobs='1931_2', tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, mode=['bw'], polyfeat_degree=5, polyfeat_include_bias=True, polyfeat_interaction_only=False, linreg_fit_intercept=False, linreg_positive=False)[source]
class luxpy.toolboxes.dispcal.LUTNNLI(training_data=None, single_channel_ramp_only_data=False, cspace='lab', nbit=8, xyzw=None, xyzb=None, black_correct=True, linearize_rgb=True, tr_par_lower_bounds=(0, -0.1, 0, -0.1), tr_L_type='Y', tr_type='pli', cieobs='1931_2', tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, mode=['bw'], number_of_nearest_neighbours=4, **kwargs)[source]
predict(x, mode, ckdtree=None, x_train=None, y_train=None)[source]
class luxpy.toolboxes.dispcal.LUTQHLI(training_data=None, single_channel_ramp_only_data=False, cspace='lab', nbit=8, xyzw=None, xyzb=None, black_correct=True, linearize_rgb=True, tr_par_lower_bounds=(0, -0.1, 0, -0.1), tr_L_type='Y', tr_type='pli', cieobs='1931_2', tr_ensure_increasing_lut_at_low_rgb=0.2, tr_force_increasing_lut_at_high_rgb=True, tr_rms_break_threshold=0.01, tr_smooth_window_factor=None, rescale=False, mode=['bw'])[source]
class luxpy.toolboxes.dispcal.VirtualDisplay(model='kwak2000_SII', seed=-1, nbit=None, channel_dependence=None, **model_pars)[source]
to_rgb(xyz, **kwargs)[source]
to_xyz(rgb, **kwargs)[source]
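
A hypothetical sketch of the characterization-model classes (here GGO_GOG_GOGO_PLI; xyz_train and rgb_train are assumed to be measured/generated ndarrays, and the tr_type choice is illustrative):

    from luxpy.toolboxes import dispcal as dc

    # xyz_train, rgb_train: e.g. rgb_train from generate_training_data() and xyz_train measured for it.
    model = dc.GGO_GOG_GOGO_PLI(training_data=(xyz_train, rgb_train),
                                tr_type='pli', cspace='lab', nbit=8)
    xyz_pred = model.to_xyz(rgb_train)    # forward: device rgb -> xyz
    rgb_pred = model.to_rgb(xyz_train)    # backward: xyz -> device rgb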

rgb2spec/

py:
  • __init__.py

  • smits_mitsuba.py

namespace:

luxpy.rgb2spec

Module for RGB to spectrum conversions

_BASESPEC_SMITS:

Default dict with base spectra for white, cyan, magenta, yellow, blue, green and red for each intent (‘rfl’ or ‘spd’)

rgb_to_spec_smits():

Convert an array of (linearized) RGB values to a spectrum using a Smits-like conversion as implemented in Mitsuba (July 10, 2019)

convert():

Convert an array of (linearized) RGB values to a spectrum (wrapper around rgb_to_spec_smits(), future: implement other methods)

luxpy.toolboxes.rgb2spec.rgb_to_spec_smits(rgb, intent='rfl', linearized_rgb=True, bitdepth=8, wlr=[360.0, 830.0, 1.0], rgb2spec=None)[source]

Convert an array of (linearized) RGB values to a spectrum using a Smits-like conversion as implemented in Mitsuba.

Args:
rgb:
ndarray or list of (linearized) rgb values
linearized_rgb:
True, optional
If False: RGB values will be linearized using:
rgb_lin = xyz_to_srgb(srgb_to_xyz(rgb), gamma = 1, use_linear_part = False)
If True: user has entered pre-linearized RGB values.
intent:
‘rfl’ (or ‘spd’), optional
type of requested spectrum conversion.
bitdepth:
8, optional
bit depth of rgb values
wlr:
_WL3, optional
desired wavelength (nm) range of spectrum.
rgb2spec:
None, optional
Dict with base spectra for white, cyan, magenta, yellow, blue, green and red for each intent.
If None: use _BASESPEC_SMITS.
Returns:
spec:
ndarray with spectrum or spectra (one for each rgb value, first row are the wavelengths)
luxpy.toolboxes.rgb2spec.convert(rgb, linearized_rgb=True, method='smits_mtsb', intent='rfl', bitdepth=8, wlr=[360.0, 830.0, 1.0], rgb2spec=None)[source]

Convert an array of RGB values to a spectrum.

Args:
rgb:
ndarray or list of rgb values
linearized_rgb:
True, optional
If False: RGB values will be linearized using:
rgb_lin = xyz_to_srgb(srgb_to_xyz(rgb), gamma = 1, use_linear_part = False)
If True: user has entered pre-linearized RGB values.
method:
‘smits_mtsb’, optional
Method to use for conversion:
- ‘smits_mtsb’: use a Smits-like conversion as implemented in Mitsuba.
intent:
‘rfl’ (or ‘spd’), optional
type of requested spectrum conversion.
bitdepth:
8, optional
bit depth of rgb values
wlr:
_WL3, optional
desired wavelength (nm) range of spectrum.
rgb2spec:
None, optional
Dict with base spectra for white, cyan, magenta, yellow, blue, green and red for each intent.
If None: use _BASESPEC_SMITS.
Returns:
spec:
ndarray with spectrum or spectra (one for each rgb value, first row are the wavelengths)
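
A minimal, hypothetical usage sketch of the converter (the sRGB triplet is illustrative):

    import numpy as np
    from luxpy.toolboxes import rgb2spec

    rgb = np.array([[200, 120, 60]])                    # 8-bit, non-linear sRGB
    rfl = rgb2spec.convert(rgb, linearized_rgb=False,   # let the toolbox linearize first
                           method='smits_mtsb', intent='rfl')
    wavelengths, spectra = rfl[0], rfl[1:]              # row 0 = wavelengths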

iolidfiles/

py:
  • __init__.py

  • io_lid_files.py

namespace:

luxpy.iolidfiles

Module for reading and writing IES and LDT files.

read_lamp_data:

Read in light intensity distribution and other lamp data from LDT or IES files.

Notes:

  1. Only basic support. Writing is not yet implemented.
  2. Reading IES files is based on Blender’s ies2cycles.py.
  3. This was implemented to build some uv-texture maps for rendering and only tested for a few files.
  4. Use at own risk. No warranties.

luxpy.toolboxes.iolidfiles.read_lamp_data(datasource, multiplier=1.0, verbosity=0, normalize='I0', only_common_keys=False)[source]

Read in light intensity distribution and other lamp data from LDT or IES files.

Args:
datasource:
Filename of LID file or StringIO object or string with LID data.
multiplier:
1.0, optional
Scaler for candela values.
verbosity:
0, optional
Display messages while reading file.
normalize:
‘I0’, optional
If ‘I0’: normalize LID to intensity at (theta,phi) = (0,0)
If ‘max’: normalize to max = 1.
If None: do not normalize.
only_common_keys:
False, optional
If True, output only common dict keys related to angles, values
and such of LID.
Call read_lid_lamp_data(?) to print the common keys and to return an
empty dict with those keys.
Returns:
lid:

dict with IES or LDT file data.

If LIDtype == ‘ies’:
  dict_keys(
    [‘datasource’, ‘version’, ‘lamps_num’, ‘lumens_per_lamp’,
     ‘candela_mult’, ‘v_angles_num’, ‘h_angles_num’, ‘photometric_type’,
     ‘units_type’, ‘width’, ‘length’, ‘height’, ‘ballast_factor’,
     ‘future_use’, ‘input_watts’, ‘v_angs’, ‘h_angs’, ‘lamp_cone_type’,
     ‘lamp_h_type’, ‘candela_values’, ‘candela_2d’, ‘v_same’, ‘h_same’,
     ‘intensity’, ‘theta’, ‘values’, ‘phi’, ‘map’, ‘Iv0’]
  )

If LIDtype == ‘ldt’:
  dict_keys(
    [‘datasource’, ‘version’, ‘manufacturer’, ‘Ityp’, ‘Isym’,
     ‘Mc’, ‘Dc’, ‘Ng’, ‘name’, ‘Dg’, ‘cct/cri’, ‘tflux’, ‘lumens_per_lamp’,
     ‘candela_mult’, ‘tilt’, ‘lamps_num’,
     ‘cangles’, ‘tangles’, ‘candela_values’, ‘candela_2d’,
     ‘intensity’, ‘theta’, ‘values’, ‘phi’, ‘map’, ‘Iv0’]
  )

Notes:
  1. if only_common_keys: output is dictionary with keys: [‘datasource’, ‘version’, ‘intensity’, ‘theta’, ‘phi’, ‘values’, ‘map’, ‘Iv0’, ‘candela_values’, ‘candela_2d’]

  2. ‘theta’,’phi’, ‘values’ (=’candela_2d’) contain the original theta angles, phi angles and normalized candelas as specified in file.

  3. ‘map’ contains a dictionary with keys ‘thetas’, ‘phis’, ‘values’. This data has been completed to full angle ranges: thetas: [0,180]; phis: [0,360]

  4. LDT map completion is only supported for Isymm == 4 (since 31/10/2018) and Isymm == 1 (since 02/10/2021); the map will be filled with the original ‘theta’, ‘phi’ and normalized ‘candela_2d’ values!

  5. LIDtype is checked by looking for the presence of ‘TILT=’ in datasource content (if True->’IES’ else ‘LDT’)

  6. IES files with TILT=INCLUDE or TILT=<filename> are not supported!
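
A minimal, hypothetical reading sketch (the file name is a placeholder):

    from luxpy.toolboxes import iolidfiles as lid

    LID = lid.read_lamp_data('luminaire.ies', normalize='I0', only_common_keys=True)
    thetas = LID['map']['thetas']    # completed theta range [0, 180]
    phis = LID['map']['phis']        # completed phi range [0, 360]
    values = LID['map']['values']    # normalized intensity grid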

luxpy.toolboxes.iolidfiles.get_uv_texture(theta, phi=None, values=None, input_types=('array', 'array'), method='linear', theta_min=0, angle_res=1, close_phi=False, deg=True, r=1, show=True, out='values_map')[source]

Create a uv-texture map with specified angular resolution (°) and with the positive z-axis as normal.
u corresponds to phi [0° - 360°]
v corresponds to theta [0° - 180°] (or [-90° - 90°])

Args:
theta:
Float, int or ndarray
Angle with positive z-axis.
Values corresponding to 0 and 180° must be specified!
phi:
None, optional
Float, int or ndarray
Angle around positive z-axis starting from x-axis.
If not None: values corresponding to 0 and 360° must be specified!
values:
None
ndarray or mesh of values at (theta, phi) locations.
input_types:
(‘array’,’array’), optional
Specification of type of input of (angles,values)
method:
‘linear’, optional
Interpolation method.
(supported scipy.interpolate.griddata methods:
‘nearest’, ‘linear’, ‘cubic’)
theta_min:
0, optional
If 0: [0, 180]; If -90: theta range = [-90,90]
close_phi:
False, optional
Make phi angles array closed (full circle).
angle_res:
1, optional
Resolution in degrees.
deg:
True, optional
Type of angle input (True: degrees, False: radians).
r:
1, optional
Float, int or ndarray
radius
show:
True, optional
Plot results.
out:
‘values_map’, optional
Specifies output: “return eval(out)”
Returns:
returns:

as specified by :out:.

luxpy.toolboxes.iolidfiles.save_texture(filename, tex, bits=16, transpose=True)[source]

Save 16 bit grayscale PNG image of uv-texture.

Args:
filename:
Filename of output image.
tex:
ndarray float uv-texture.
transpose:
True, optional
If True: transpose tex (u,v) to set u as columns and v as rows
in texture image.
Returns:
None:

Note:
Texture is rescaled to max = 1 and saved as uint16.
–> Before using uv_map: rescale back to set ‘normal’ to 1.
luxpy.toolboxes.iolidfiles.draw_lid(LID, grid_interp_method='linear', theta_min=0, angle_res=1, ax=None, projection='2d', polar_plot_Cx_planes=[0, 90], use_scatter_plot=False, plot_colorbar=True, legend_on=True, plot_luminaire_position=True, plot_diagram_top=0.001, out='ax', **plottingkwargs)[source]

Draw the light intensity distribution.

Args:
LID:
dict with IES or LDT file data.
(obtained with iolidfiles.read_lamp_data())
grid_interp_method:
‘linear’, optional
Interpolation method for (theta,phi)-grid of normalized luminous intensity values.
(supported scipy.interpolate.griddata methods:
‘nearest’, ‘linear’, ‘cubic’)
theta_min:
0, optional
If 0: [0, 180]; If -90: theta range = [-90,90]
angle_res:
1, optional
Resolution in degrees.
ax:
None, optional
If None: create new 3D-axes for plotting.
projection:
‘2d’, optional
If ‘3d’: make a 3D plot
If ‘2d’: make polar plot(s). [not yet implemented (25/03/2021)]
polar_plot_Cx_planes:
[0,90], optional
Plot (Cx)-(Cx+180) planes; e.g. [0,90] will plot the C0-C180 and C90-C270 planes in the 2D polar plot.
use_scatter_plot:
False, optional
If True: use plt.scatter for plotting intensity values in 3D plot.
If False: use plt.plot_surface for plotting in 3D plot.
plot_colorbar:
True, optional
Plot colorbar representing the normalized luminous intensity values in the LID 3D plot.
legend_on:
True, optional
If True: plot legend on polar plot (no legend for 3D plot!).
plot_luminaire_position:
True, optional
Plot the position of the luminaire (0,0,0) in the 3D graph as a red diamond.
plot_diagram_top:
1e-3, optional
Plot the top of the polar diagram (True).
If None: automatic detection of non-zero intensity values in top part.
If float: automatic detection of intensity values larger than max_intensity*float in top part.
(if smaller: don’t plot top.)
out:
‘ax’, optional
string with variable to return
default: ax handle to plot.
Returns:
returns:
Whatever requested as determined by the string in :out:
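
A hypothetical plotting sketch (the file name is a placeholder; the 3D projection is used here, cf. the note on the 2D option above):

    from luxpy.toolboxes import iolidfiles as lid

    LID = lid.read_lamp_data('luminaire.ies')
    ax = lid.draw_lid(LID, projection='3d', angle_res=2, use_scatter_plot=False)
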
luxpy.toolboxes.iolidfiles.render_lid(LID='./data/luxpy_test_lid_file.ies', sensor_resolution=100, sensor_position=[0, -1, 0.8], sensor_n=[0, 1, -0.2], fov=(90, 90), Fd=2, luminaire_position=[0, 1.3, 2], luminaire_n=[0, 0, -1], wall_center=[0, 2, 1], wall_n=[0, -1, 0], wall_width=4, wall_height=2, wall_rho=1, floor_center=[0, 1, 0], floor_n=[0, 0, 1], floor_width=4, floor_height=2, floor_rho=1, grid_interp_method='linear', angle_res=5, theta_min=0, ax3D=None, ax2D=None, join_axes=True, legend_on=True, plot_luminaire_position=True, plot_lumiaire_rays=False, plot_luminaire_lid=True, plot_sensor_position=True, plot_sensor_pixels=True, plot_sensor_rays=False, plot_wall_edges=True, plot_wall_luminance=True, plot_wall_intersections=False, plot_floor_edges=True, plot_floor_luminance=True, plot_floor_intersections=False, out='Lv2D')[source]

Render a light intensity distribution.

Args:
LID:
dict with IES or LDT file data or string with path/filename;
or String or StringIO object with IES or LDT data.
(dict should be obtained with iolidfiles.read_lamp_data())
sensor_resolution:
100, optional
Number of sensor ‘pixels’ along each dimension.
sensor_position:
[0,-1,0.8], optional
x,y,z position of the sensor ‘focal’ point (is located Fd meters behind actual sensor plane)
sensor_n:
[0,1,-0.2], optional
Sensor plane surface normal
fov:
(90,90), optional
Field of view of sensor image in degrees.
Fd:
2, optional
‘Focal’ distance in meters. Sensor center is located Fd meters away from
:sensor_position:
luminaire_position:
[0,1.3,2], optional
x,y,z position of the photometric equivalent point source
luminaire_n:
[0,0,-1], optional
Orientation of luminaire LID (default points downward along the z-axis away from the source)
wall_center:
[0,2,1], optional
x,y,z position of the back wall
wall_n:
[0,-1,0], optional
surface normal of wall
wall_width:
4, optional
width of wall (m)
wall_height:
2, optional
height of wall (m)
wall_rho:
1, optional
Diffuse (Lambertian) reflectance of wall.
floor_center:
[0,1,0], optional
x,y,z position of the floor
floor_n:
[0,0,1], optional
surface normal of floor
floor_width:
4, optional
width of floor (m)
floor_height:
2, optional
height of floor (m)
floor_rho:
1, optional
Diffuse (Lambertian) reflectance of floor.
grid_interp_method:
‘linear’, optional
Interpolation method for (theta,phi)-grid of normalized luminous intensity values.
(supported scipy.interpolate.griddata methods:
‘nearest’, ‘linear’, ‘cubic’)
theta_min:
0, optional
If 0: [0, 180]; If -90: theta range = [-90,90]
Only used when generating a plot of the LID in the 3D graphs.
angle_res:
1, optional
Angle resolution in degrees of LID sampling.
Only used when generating a plot of the LID in the 3D graphs.
ax3D,ax2D:
None, optional
If None: create new 3D- or 2D- axes for plotting.
If join_axes == True: try and combine two axes on same figure.
If False: don’t plot.
legend_on:
True, optional
Plot legend.
plot_luminaire_position:
True, optional
Plot the position of the luminaire (0,0,0) in the graph as a red diamond.
plot_X…:
Various options to customize plotting. Mainly allows for plotting of
additional info such as plane-ray intersection points, sensor pixels,
sensor-to-plane rays, plane-to-luminaire rays, 3D plot of LID, etc.
out:
‘Lv2D’, optional
string with variable to return
default: variable storing a grayscale image of the rendered LID.
Returns:
returns:
Whatever requested as determined by the string in :out:
luxpy.toolboxes.iolidfiles.luminous_intensity_to_luminous_flux(phis, thetas, I, interp=False, dp=1, dt=1, use_RBFInterpolator=True)[source]

Calculate luminous flux from luminous intensity values.

Args:
phis:
Array [N,] of Phi angles in degrees for which intensity values are available.
thetas:
Array [M,] of Theta angles in degrees for which intensity values are available.
I:
Array [N,M] of luminous intensity values (in cd).
interp:
False, optional
If True: interpolate I for new phis [0,360] with :dp: spacing and new thetas [0,180] with :dt: spacing
dp:
Angle spacing of new phi angles upon interpolation.
dt:
Angle spacing of new theta angles upon interpolation.
use_RBFInterpolator:
If True: use the slower but smoother scipy.interpolate.RBFInterpolator
If False: use scipy.interpolate.LinearNDInterpolator
Returns:
flux:
Luminous flux (in lm).
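
The underlying relation is Phi = ∫∫ I(theta, phi) sin(theta) dtheta dphi; a standalone numpy check on an isotropic 1 cd source (expected flux 4*pi ≈ 12.57 lm), not calling luxpy, is:

    import numpy as np

    dp, dt = 1.0, 1.0                                   # angle spacing in degrees
    phis = np.deg2rad(np.arange(0, 360 + dp, dp))
    thetas = np.deg2rad(np.arange(0, 180 + dt, dt))
    I = np.ones((phis.size, thetas.size))               # isotropic source: 1 cd everywhere
    integrand = I * np.sin(thetas)[None, :]
    flux = np.trapz(np.trapz(integrand, thetas, axis=1), phis)
    print(flux, 4 * np.pi)                              # both ~12.566 lm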

spectro/

py:
  • __init__.py

  • spectro.py

namespace:

luxpy.spectro

Package for spectral measurements

Supported devices:

  • JETI: specbos 1211, etc.

  • OceanOptics: QEPro, QE65Pro, QE65000, USB2000, USB650, etc.

get_spd():

wrapper function to measure a spectral power distribution using a spectrometer of one of the supported manufacturers.

Notes

  1. For info on the input arguments of get_spd(), see help for each identically named function in each of the subpackages.

  2. The use of jeti spectrometers requires access to some dll files (delivered with this package).

  3. The use of oceanoptics spectrometers requires the manual installation of pyseabreeze, as well as some other ‘manual’ settings. See help for oceanoptics sub-package.

luxpy.toolboxes.spectro.init(manufacturer)[source]

Import module for specified manufacturer. Make sure everything (drivers, external packages, …) required is installed!

luxpy.toolboxes.spectro.get_spd(manufacturer='jeti', dvc=0, Tint=0, autoTint_max=None, close_device=True, out='spd', **kwargs)[source]

Measure a spectral power distribution using a spectrometer of one of the supported manufacturers.

Args:
manufacturer:
‘jeti’ or ‘oceanoptics’, optional
Manufacturer of spectrometer (ensures the correct module is loaded).
dvc:
0 or int or spectrometer handle, optional
If int: function will try to initialize the spectrometer to
obtain a handle. The int represents the device
number in a list of all detected devices of the manufacturer.
Tint:
0 or Float, optional
Integration time in seconds. (if 0: find best integration time, but < autoTint_max).
autoTint_max:
Limit Tint to this value when Tint = 0.
close_device:
True, optional
Close spectrometer after measurement.
If ‘dvc’ not in out.split(‘,’): always close!!!
out:
“spd” or e.g. “spd,dvc,Errors”, optional
Requested return.
kwargs:
For info on additional input (keyword) arguments of get_spd(),
see help for each identically named function in each of the subpackages.
Returns:
spd:
ndarray with spectrum. (row 0: wavelengths, row1: values)
dvc:
Device handle, if successfully opened (_ERROR: failure, nan: closed)
Errors:
Dict with error messages.
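
A hypothetical measurement sketch (requires a connected, supported spectrometer and its drivers):

    from luxpy.toolboxes import spectro

    spd = spectro.get_spd(manufacturer='jeti', dvc=0, Tint=0, autoTint_max=5, out='spd')
    wavelengths, values = spd[0], spd[1]    # row 0: wavelengths, row 1: spectral values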

sherbrooke_spectral_indices/

py:
  • __init__.py

  • sherbrooke_spectral_indices_2013.py

namespace:

luxpy.sherbrooke_spectral_indices

Module for the calculation of the Melatonin Suppression Index (MSI), the Induced Photosynthesis Index (IPI) and the Star Light Index (SLI)

spd_to_msi():

calculate Melatonin Suppression Index from spectrum.

spd_to_ipi():

calculate Induced Photosynthesis Index from spectrum.

spd_to_sli():

calculate Star Light Index from spectrum.

References:

1. Aubé M, Roby J, Kocifaj M (2013) Evaluating Potential Spectral Impacts of Various Artificial Lights on Melatonin Suppression, Photosynthesis, and Star Visibility. PLoS ONE 8(7): e67798 https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0067798

Created on Fri Jun 11 13:46:33 2021

@author: ksmet1977 [at] gmail dot com

luxpy.toolboxes.sherbrooke_spectral_indices.spd_to_msi(spd, force_5nm_interval=True)[source]

Calculate Melatonin Suppression Index from spectrum.

Args:
spd:
ndarray with spectral data (first row are wavelengths)
force_5nm_interval:
True, optional
If True: interpolate spd to 5 nm wavelength intervals, else: keep as in spd.
Returns:
msi:
ndarray with Melatonin Suppression Index values for each input spectrum.
luxpy.toolboxes.sherbrooke_spectral_indices.spd_to_ipi(spd, force_5nm_interval=True)[source]

Calculate Induced Photosynthesis Index from spectrum.

Args:
spd:
ndarray with spectral data (first row are wavelengths)
force_5nm_interval:
True, optional
If True: interpolate spd to 5 nm wavelength intervals, else: keep as in spd.
Returns:
ipi:
ndarray with Induced Photosynthesis Index values for each input spectrum.
luxpy.toolboxes.sherbrooke_spectral_indices.spd_to_sli(spd, force_5nm_interval=True)[source]

Calculate Star Light Index from spectrum.

Args:
spd:
ndarray with spectral data (first row are wavelengths)
force_5nm_interval:
True, optional
If True: interpolate spd to 5 nm wavelength intervals, else: keep as in spd.
Returns:
sli:
ndarray with Star Light Index values for each input spectrum.
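
A minimal, hypothetical usage sketch (the spectrum is assumed to be an ndarray with the wavelengths in the first row; here a CIE illuminant from luxpy is used for illustration, assuming a 'D65' entry in luxpy._CIE_ILLUMINANTS):

    import luxpy as lx
    from luxpy.toolboxes import sherbrooke_spectral_indices as sbi

    spd = lx._CIE_ILLUMINANTS['D65']    # any spectrum with wavelengths in row 0 works
    msi = sbi.spd_to_msi(spd)
    ipi = sbi.spd_to_ipi(spd)
    sli = sbi.spd_to_sli(spd)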

spectral_mismatch_and_uncertainty/

py:
  • __init__.py

  • detector_spectral_mismatch.py

namespace:

luxpy.spectral_mismatch_and_uncertainty

Toolbox for spectral mismatch and measurement uncertainty calculations

spectral_mismatch_and_uncertainty/detector_spectral_mismatch.py

f1prime():

Determine the f1prime spectral mismatch index.

get_spectral_mismatch_correction_factors():

Determine the spectral mismatch factors.

Reference

  1. Krüger, U. et al. GENERAL V(λ) MISMATCH - INDEX HISTORY, CURRENT STATE, NEW IDEAS (TechnoTeam)


Created on Tue Aug 31 10:46:02 2021

@author: ksmet1977 [at] gmail.com

luxpy.toolboxes.spectral_mismatch_and_uncertainty.f1prime(s_detector, S_C='A', cieobs='1931_2', s_target_index=2, wlr=None, interp_kind='linear', out='f1p')[source]

Determine the f1prime spectral mismatch index.

Args:
s_detector:
ndarray with detector spectral responsivity (first row = wavelengths)
S_C:
‘A’, optional
Standard ‘calibration’ illuminant.
string specifying the illuminant to use from the luxpy._CIE_ILLUMINANTS dict
or ndarray with standard illuminant spectral data.
cieobs:
‘1931_2’, optional
string with CIE standard observer color matching functions to use (from luxpy._CMF)
or ndarray with CMFs (s_target_index > 0)
or target spectral responsivity (s_target_index == 0)
(first row contains the wavelengths).
s_target_index:
2, optional
if > 0: index into CMF set (1->’xbar’, 2->’ybar’=’Vlambda’, 3->’zbar’)
if == 0: cieobs is expected to contain an ndarray with the target spectral responsivity.
wlr:
None, optional
Wavelength range (None, ndarray or [start, stop, spacing]).
If None: the wavelengths of the detector are used throughout.
interp_kind:
‘linear’, optional
Interpolation type to use when interpolating function to specified wavelength range.
out:
‘f1p’, optional
Specify requested output of function,
e.g. ‘f1p,s_rel’ also outputs the normalized target spectral responsivity.
Returns:
f1p:
ndarray (vector) with f1prime values for each of the spectral responsivities in s_detector.
luxpy.toolboxes.spectral_mismatch_and_uncertainty.get_spectral_mismatch_correction_factors(S_Z, s_detector, S_C='A', cieobs='1931_2', s_target_index=2, wlr=None, interp_kind='linear', out='F')[source]

Determine the spectral mismatch factors.

Args:
S_Z:
ndarray with spectral power distribution of measured light source (first row = wavelengths).
s_detector:
ndarray with detector spectral responsivity (first row = wavelengths)
S_C:
‘A’, optional
Standard ‘calibration’ illuminant.
string specifying the illuminant to use from the luxpy._CIE_ILLUMINANTS dict
or ndarray with standard illuminant spectral data.
cieobs:
‘1931_2’, optional
string with CIE standard observer color matching functions to use (from luxpy._CMF)
or ndarray with CMFs (s_target_index > 0)
or target spectral responsivity (s_target_index == 0)
(first row contains the wavelengths).
s_target_index:
2, optional
if > 0: index into CMF set (1->’xbar’, 2->’ybar’=’Vlambda’, 3->’zbar’)
if == 0: cieobs is expected to contain an ndarray with the target spectral responsivity.
wlr:
None, optional
Wavelength range (ndarray or [start, stop, spacing]).
If None: use the wavelength range of S_Z.
interp_kind:
‘linear’, optional
Interpolation type to use when interpolating function to specified wavelength range.
out:
‘F’, optional
Specify requested output of function,
e.g. ‘F,f1p’ also outputs the f1prime spectral mismatch index.
Returns:
F:
ndarray with correction factors for each of the measured spectra (rows)
and spectral responsivities in s_detector (columns).
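
A hypothetical usage sketch (s_detector and S_Z are assumed ndarrays with the wavelengths in their first rows):

    from luxpy.toolboxes import spectral_mismatch_and_uncertainty as smu

    # s_detector: detector responsivity; S_Z: spectrum of the measured source (both assumed available).
    f1p = smu.f1prime(s_detector, S_C='A', cieobs='1931_2', s_target_index=2)
    F = smu.get_spectral_mismatch_correction_factors(S_Z, s_detector, S_C='A',
                                                     cieobs='1931_2', s_target_index=2)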

technoteamlmk/

py:
  • __init__.py

  • TechnoTeamLMK.py

namespace:

luxpy.technoteamlmk

Created on Sat Nov 26 10:32:55 2022

@author: ksmet1977

luxpy.toolboxes.technoteamlmk.get_labsoft_path()[source]
luxpy.toolboxes.technoteamlmk.define_lens(lens_type, name, focusFactors=None)[source]

Define a technoteam lens

class luxpy.toolboxes.technoteamlmk.lmkActiveX(camera, lens, focusfactor=None, autoscan=True, autoexposure=True, modfrequency=60, maxtime=10, labsoft_camera_path='C:/TechnoTeam/LabSoft/Camera/', verbosity=None)[source]

Class for TechnoTeam LMK camera basic control

All supported camera/lens combinations are defined in _CAMERAS. To add new ones (or new lenses): edit the _CAMERAS dict.

lmk = None
verbosity_levels = {0: 'none', 1: 'minimal', 2: 'moderate (default)', 3: 'Detailed', 4: 'All'}
workingImage = None
errorFlag = None
colorSpace = {'C*h*_ab': 2048, 'C*h*s*_uv': 512, 'CIE-RGB': 1, 'EBU-RGB': 4, 'HSI': 8192, 'HSV': 4096, 'L*a*b*': 1024, 'L*u*v*': 256, 'LWS': 65536, 'Lrg': 32768, 'Lu_v_': 128, 'Luv': 64, 'Lxy': 32, 'S-RGB': 2, 'WST': 16384, 'XYZ': 16}
imageType = {'Camera': -3, 'Color': -1, 'Evaluation[1]': 0, 'Evaluation[2]': 1, 'Evaluation[3]': 2, 'Evaluation[4]': 3, 'Evaluation[5]': 4, 'Luminance': -2}
regionType = {'AND': {'identifier': 9, 'points': 2}, 'Circle': {'identifier': 2, 'points': 2}, 'CircularRing': {'identifier': 6, 'points': 3}, 'Ellipse': {'identifier': 5, 'points': 3}, 'Line': {'identifier': 1, 'points': 2}, 'OR': {'identifier': 7, 'points': 2}, 'Polygon': {'identifier': 3, 'points': 3}, 'Polyline': {'identifier': 4, 'points': 3}, 'Rectangle': {'identifier': 0, 'points': 2}, 'XOR': {'identifier': 8, 'points': 2}}
statisticType = {'bitHistogramGrey': 6, 'bitHistorgramColor': 7, 'chromaticityAreaColor': 33, 'chromaticityLineColor': 31, 'contrastGrey': 40, 'histogramColor': 5, 'histogramGrey': 4, 'integralColor': 23, 'integralGrey': 22, 'integralNegativeColor': 38, 'integralNegativeGrey': 36, 'lightArcGrey': 26, 'luminanceGrey': 20, 'projectionColor': 9, 'projectionGrey': 8, 'sectionalColor': 3, 'sectionalGrey': 2, 'spiralWoundGrey': 28, 'standardColor': 1, 'standardGrey': 0, 'symbolColor': 25, 'symbolGrey': 24, 'symbolNegativeColor': 39, 'threeDviewGrey': 34}
captureStartRatio = 10
captureMaxTries = 3
captureFactor = 3
captureCountPic = 1
captureDefaultMaxTries = 3
boolStr = ['False', 'True']
camera = {'ttf8847': {'lenses': {'x12mm': {'focusFactors': {'TTScale0_3': 0, 'TTScale0_5': 1, 'TTScale1': 2, 'TTScale3': 3, 'TTScaleInfinite': 4}, 'name': 'o95653f12'}, 'x25mm': {'focusFactors': {'TTScale00': 0, 'TTScale01': 1, 'TTScale02': 2, 'TTScale03': 3, 'TTScale04': 4, 'TTScale05': 5, 'TTScale06': 6, 'TTScale07': 7, 'TTScale08': 8, 'TTScale09': 9, 'TTScale10': 10, 'TTScale11': 11, 'TTScale12': 12, 'TTScale13': 13, 'TTScale14': 14, 'TTScale15': 15, 'TTScale16': 16, 'TTScale17': 17, 'TTScale18': 18, 'TTScale19': 19}, 'name': 'oB225463f25'}, 'x50mm': {'focusFactors': {'TTScale00': 0, 'TTScale01': 1, 'TTScale02': 2, 'TTScale03': 3, 'TTScale04': 4, 'TTScale05': 5, 'TTScale06': 6, 'TTScale07': 7, 'TTScale08': 8, 'TTScale09': 9, 'TTScale10': 10, 'TTScale11': 11, 'TTScale12': 12, 'TTScale13': 13, 'TTScale14': 14, 'TTScale15': 15, 'TTScale16': 16, 'TTScale17': 17, 'TTScale18': 18, 'TTScale19': 19}, 'name': 'oC216813f50'}, 'x6_5mm': {'name': 'o13196f6_5'}}, 'name': 'ttf8847'}, 'tts20035': {'lenses': {'x12f50mm_2mm': {'name': 'oTTNED-12_50_2mmEP'}, 'x12f50mm_4mm': {'name': 'oTTNED-12_50_4mmEP'}, 'x12mm_TTC_163': {'name': 'oTTC-163_D0224'}, 'x50mm_M00442': {'focusFactors': {'TTScale00': 0, 'TTScale01': 1, 'TTScale02': 2, 'TTScale03': 3, 'TTScale04': 4, 'TTScale05': 5, 'TTScale06': 6, 'TTScale07': 7, 'TTScale08': 8, 'TTScale09': 9, 'TTScale10': 10, 'TTScale11': 11, 'TTScale12': 12, 'TTScale13': 13, 'TTScale14': 14, 'TTScale15': 15, 'TTScale16': 16, 'TTScale17': 17, 'TTScale18': 18, 'TTScale19': 19, 'TTScale20': 20, 'TTScale21': 21, 'TTScale22': 22, 'TTScale23': 23}, 'name': 'oM00442f50'}, 'xvr': {'name': 'oTTC-163_D0224'}}, 'name': 'tts20035'}}
verbosity = 2
classmethod show_labsoft_gui(show=3)[source]
classmethod open_lmk_labsoft_connection(objectiveCalibrationPath=None, show_gui=3)[source]

Initializes a connection to LMK LabSoft.

Input:
  • objectiveCalibrationPath: path to calibration file

Output:
  • answer: 0=no error, other=error code

classmethod close_lmk_labsoft_connection(open_dialog=0)[source]

Closes the connection to LMK LabSoft and LabSoft itself.

Input:
  • open_dialog:

    If 0: no dialog window.
    Else: opens a dialog window in the LabSoft application.

    The user can choose whether they wish to save
    the current state or not, or to cancel
    the closing of LabSoft.
Output:
  • answer: 0=no error, other=error code

classmethod delete(open_dialog=0)[source]

Delete lmk class object (close connection to labsoft)

classmethod setWorkingImage(w)[source]

Set the current working image

classmethod display_error_info(err_code, process_id='')[source]

Get the info for err_code and print

classmethod get_filter_wheel_info()[source]

Get max. number of filter wheels and their names.

classmethod measureColorMultipic(countPic=None, defaultMaxTries=None)[source]

Capture a ColorMultiPicture

classmethod saveImage(folderXYZ, fileNameXYZ)[source]

Save the currently captured image (the workingImage) to the specified file and folder

classmethod loadImage(pathXYZ)[source]

Load a previously captured image from a specific path

classmethod getStatistic(statisticType, regionName, colorSpace)[source]

Get Cmin, Cmax, Cmean, Cvar for specified colorSpace for a specific region

classmethod get_color_autoscan_times()[source]
classmethod capture_X_map(folder, fileName, X_type='XYZ', startRatio=None, factor=None, countPic=None, defaultMaxTries=None, autoscan=None, autoexposure=None, modfrequency=None, maxtime=None)[source]

Measure XYZ / Y image and save as .pcf / .pf image. (parameters as set in class attributes)

classmethod captureXYZmap(folderXYZ, fileNameXYZ, startRatio=None, factor=None, countPic=None, defaultMaxTries=None, autoscan=None, autoexposure=None, modfrequency=None, maxtime=None)[source]

Measure XYZ image and save as .pcf image. (parameters as set in class attributes)

classmethod captureYmap(folderY, fileNameY, startRatio=None, factor=None, countPic=None, defaultMaxTries=None, autoscan=None, autoexposure=None, modfrequency=None, maxtime=None)[source]

Measure Y image and save as .pf image. (parameters as set in class attributes)

classmethod createEllips(centerPt, width, height, regionName)[source]

Create an ellipse with:
  • a center point defined by centerPt (contains x and y value)
  • a certain width (horizontal axis)
  • a certain height (vertical axis)
and give the ellipse region a regionName (string).

Function returns the regionIndex of the ellipse.

classmethod createPolygon(pointsXY, regionName)[source]

Create a polygon with:
  • vertices specified in pointsXY (x -> width, y -> height)
and give the polygon region a regionName (string).

Function returns the regionIndex of the polygon.

classmethod createRectangle(topleftXY, bottomrightXY, regionName)[source]

Create a rectangle spanning:
  • the top-left (topleftXY) and bottom-right (bottomrightXY) vertices
and give the rectangle region a regionName (string).

Function returns the regionIndex of the rectangle.

classmethod deleteRegionByName(regionName)[source]

Delete a region by its regionName

classmethod getRegionIndexByName(regionName)[source]

Return the index of a region with a region name set to regionName.

classmethod createStatisticObjectOfRegion(regionName, statisticType)[source]

Create a color statistic object. Call as follows:
createStatisticObjectOfRegion('regionTestName', statisticType['standardColor'])
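
Example (a minimal sketch of a typical capture-and-analyze sequence; folder, file and
region names, coordinates and the colorSpace value are placeholders, and the lmk alias
from the earlier connection sketch is assumed):

    lmk.captureXYZmap('./measurements', 'scene_001')             # capture XYZ image -> .pcf

    idx = lmk.createRectangle([100, 100], [400, 300], 'roi_1')   # rectangular region
    lmk.createStatisticObjectOfRegion('roi_1', lmk.statisticType['standardColor'])

    # Cmin, Cmax, Cmean, Cvar in the requested color space for the region:
    stats = lmk.getStatistic(lmk.statisticType['standardColor'], 'roi_1', 'XYZ')

    lmk.deleteRegionByName('roi_1')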

classmethod selectRegionByIndex(ind, s)[source]

Select a region by its index number; s defines whether the region is selected or deselected (True or False)

classmethod selectRegionByName(regionName, s)[source]

Select a region by its region name; s defines whether the region is selected or deselected (True or False)

classmethod setIntegrationTime(wishedTime)[source]

Set the integration time.
[int32, double] LMKAxServer::iSetIntegrationTime(double _dWishedTime, double & _drRealizedTime)

Parameters:
_dWishedTime:
  Wished integration time
_drRealizedTime:
  Realized integration time

classmethod getIntegrationTime()[source]

Get the integration time.
[int32, double, double, double, double, double] LMKAxServer::iGetIntegrationTime(handle, double _drCurrentTime, double & _drPreviousTime, double & _drNextTime, double & _drMinTime, double & _drMaxTime)

Determine the current exposure time and other time parameters.

Parameters:
_drCurrentTime:
  Current integration time
_drPreviousTime:
  Next smaller (proposed) time
_drNextTime:
  Next larger (proposed) time
_drMinTime:
  Minimal possible time
_drMaxTime:
  Maximal possible time
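
Example (a minimal sketch; the tuple layout of the Python wrappers is an assumption
based on the ActiveX signatures quoted above):

    err, realized_time = lmk.setIntegrationTime(0.5)   # request a 0.5 s integration time
    err, t_cur, t_prev, t_next, t_min, t_max = lmk.getIntegrationTime()
    print(f'integration time: {t_cur} s (allowed range: {t_min} .. {t_max} s)')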

classmethod set_autoscan(autoscan=None)[source]

Set auto scan.

If the option Autoscan is on, then the exposure time of the camera is automatically determined before each capture by the autoscan algorithm. In the case of a color capture the autoscan algorithm is applied to each color filter separately.

classmethod get_autoscan()[source]

Get auto scan.

classmethod set_autoexposure(autoexposure=None)[source]

Set Automatic-Flag for all exposure times.

If this flag is set, all exposure times will be automatically adjusted when the camera exposure time is reduced or enlarged.

classmethod get_autoexposure()[source]

Get Automatic-Flag for all exposure times.

classmethod set_focusfactor(focusfactor=None)[source]

Set focus factor of lens

classmethod get_focusfactor()[source]

Get focus factor of lens

classmethod set_max_exposure_time(maxtime=None)[source]

Set the maximum possible exposure time.
int LMKAxServer::iSetMaxCameraTime(double _dMaxCameraTime)

The maximum value is of course restricted by the camera properties,
but you can use an even smaller time to avoid too long measurement times.

Parameters:
_dMaxCameraTime:
  Wished value for the maximum camera time

classmethod get_max_exposure_time()[source]

Get the maximum possible exposure time.

classmethod set_mod_frequency(modfrequency=None)[source]

Set the frequency of modulated light.
int LMKAxServer::iSetModulationFrequency(double _dModFrequency)

If the light source is driven by alternating current,
there are some restrictions on the exposure times.
Please inform the program about the modulation frequency.

Parameters:
_dModFrequency:
  Frequency of the light source. Use 0 if no modulation is to be considered.

classmethod get_mod_frequency()[source]

Get the frequency setting of modulated light.
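
Example (a minimal sketch combining the capture settings above; the values are
placeholders, not recommendations, and the lmk alias is assumed as before):

    lmk.set_autoscan(1)              # determine the exposure time automatically before each capture
    lmk.set_autoexposure(1)          # adjust all exposure times together
    lmk.set_max_exposure_time(5.0)   # cap the exposure time to limit measurement duration
    lmk.set_mod_frequency(50.0)      # light source driven at 50 Hz mains; use 0 if unmodulated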

classmethod init()[source]

Initialize the lmk ActiveX object

classmethod set_converting_units(units_name='L', units='cd/m²', units_factor=1)[source]

Set the converting units (units_name, units, units_factor)

classmethod get_converting_units()[source]

Get the converting units (units_name, units, units_factor)

classmethod set_verbosity(value)[source]
classmethod checkForError()[source]
luxpy.toolboxes.technoteamlmk.kill_lmk4_process(verbosity=1)[source]
luxpy.toolboxes.technoteamlmk.read_pcf(fname)[source]

Read a TechnoTeam PCF image (note: the returned image is float32 CIE-RGB).

luxpy.toolboxes.technoteamlmk.write_pcf(fname, data)[source]

Write a basic TechnoTeam PCF image (note: the image data is expected as float32 CIE-RGB).

luxpy.toolboxes.technoteamlmk.plot_pcf(img, to_01_range=True, ax=None)[source]

Plot a TechnoTeam PCF image.

luxpy.toolboxes.technoteamlmk.pcf_to_xyz(pcf_image)[source]

Convert a TechnoTeam PCF image to XYZ

luxpy.toolboxes.technoteamlmk.xyz_to_pcf(xyz)[source]

Convert an xyz image to a TechnoTeam PCF

luxpy.toolboxes.technoteamlmk.ciergb_to_xyz(rgb)[source]

Convert CIE-RGB to XYZ

luxpy.toolboxes.technoteamlmk.xyz_to_ciergb(xyz)[source]

Convert XYZ to CIE-RGB
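
Example (a minimal sketch; the file name is a placeholder):

    from luxpy.toolboxes import technoteamlmk as ttlmk

    rgb = ttlmk.read_pcf('./measurements/scene_001.pcf')  # float32 CIE-RGB image
    ttlmk.plot_pcf(rgb, to_01_range=True)                 # quick visual check
    xyz = ttlmk.ciergb_to_xyz(rgb)                        # convert the CIE-RGB data to XYZ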

class luxpy.toolboxes.technoteamlmk.Defisheye(infile, **kwargs)[source]

fov:
  fisheye field of view (aperture) in degrees
pfov:
  perspective field of view (aperture) in degrees
xcenter:
  x center of fisheye area
ycenter:
  y center of fisheye area
radius:
  radius of fisheye area
angle:
  image rotation in degrees (clockwise)
dtype:
  linear, equalarea, orthographic, stereographic
format:
  circular, fullframe

_map(i, j, ofocinv, dim)[source]
convert(image=None, outfile=None)[source]
_start_att(vkwargs, kwargs)[source]

Starting attributes
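
Example (a minimal sketch; file names and parameter values are placeholders, and the
keyword names follow the attribute list above):

    from luxpy.toolboxes.technoteamlmk import Defisheye

    defish = Defisheye('fisheye_capture.jpg', fov=180, pfov=120,
                       dtype='linear', format='fullframe')
    defish.convert(outfile='fisheye_capture_rectilinear.jpg')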

stereoscopicviewer/

py:
  • __init__.py

  • /harfang/

  • harfang_viewer.py

namespace:

luxpy.stereoscopicviewer

luxpy.toolboxes.stereoscopicviewer.CreateSphereModel(decl: ~.VertexLayout = None, radius: float = 1, subdiv_x: int = 256, subdiv_y: int = 256, flip_normals=False)[source]

Create a Sphere Model.

Args:
decl:
VertexLayout declaration
If None: the following is created: PosFloatNormalFloatTexCoord0Float
(if using texture images: this is the one that is required)
radius:
1, optional
Radius of sphere
subdiv_x:
256, optional
Number of subdivisions along sphere axis
subdiv_y:
256, optional
Number of subdivisions along sphere circumference.
flip_normals:
False, optional
If True: flip the direction of the normals of the vertices.
Returns:
Model:
Harfang Sphere Model
luxpy.toolboxes.stereoscopicviewer.CreatePlaneModel(decl: ~.VertexLayout = None, width: float = 1, height: float = 1, subdiv_x: int = 256, subdiv_y: int = 256, flip_normals=False)[source]

Create a Plane (Quad) Model.

Args:
decl:
VertexLayout declaration
If None: the following is created: PosFloatNormalFloatTexCoord0Float
(if using texture images: this is the one that is required)
width:
1, optional
Width of plane
height:
1, optional
height of plane
subdiv_x:
256, optional
Number of subdivisions along plane height
subdiv_y:
256, optional
Number of subdivisions along plane width.
flip_normals:
False, optional
If True: flip the direction of the normals of the vertices.
Returns:
Model:
Harfang Plane Model
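
Example (a minimal sketch; it assumes the Harfang package used by this toolbox is
installed and a rendering context has been set up, e.g. by the viewer classes below):

    from luxpy.toolboxes import stereoscopicviewer as sv

    # decl=None lets both functions build the required PosFloatNormalFloatTexCoord0Float layout
    sphere_mdl = sv.CreateSphereModel(None, radius=10.0, subdiv_x=256, subdiv_y=256,
                                      flip_normals=True)   # flipped normals: view from inside
    plane_mdl = sv.CreatePlaneModel(None, width=4.0, height=3.0)
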
luxpy.toolboxes.stereoscopicviewer.create_material(prg_ref, res, ubc=None, orm=None, slf=None, tex=None, blend_mode=5, faceculling=2)[source]

Create a Harfang material with specified color and texture properties.

Args:
prg_ref:
shader program from assets (ref)
res:
resources
ubc:
uBaseOpacityColor
orm:
uOcclusionRoughnessMetalnessColor
slf:
uSelfColor
tex:
uSelfMap texture (if not None: any color input is ignored !)
blend_mode:
hg.BM_Opaque, optional
Blend mode
faceculling:
hg.FC_CounterClockwise
Sets face culling (hg.FC_CounterClockwise, hg.FC_Clockwise, hg.FC_Disabled)
Returns:
mat:
Harfang material (note that material program variant has been updated
accordingly; see: hg.UpdateMaterialPipelineProgramVariant)
luxpy.toolboxes.stereoscopicviewer.update_material_texture(node, res, tex, mat_idx=0, name='uSelfMap', stage=4, texListPreloaded=None)[source]

Update the texture of a Harfang material.

Args:
node:
Node to which material belongs
res:
Pipeline resources
tex:
New texture
mat_idx:
0, optional
index of material in material table of object
name:
“uSelfMap”, optional
name of material type (depends on shader used; the default is for the pbr shader)
stage:
4, optional
Render stage: depends on features, shader, … (see “writing a pipeline shader” in Harfang documentation)
texListPreloaded:
None, optional
List with preloaded textures (to speed up texture update as it doesn’t need to be read from file anymore while looping over frames)
Returns:
mat:
Harfang material (note that material program variant has been updated
accordingly; see: hg.UpdateMaterialPipelineProgramVariant)
luxpy.toolboxes.stereoscopicviewer.makeColorTex(color, texHeight=100, texWidth=100, save=None)[source]

Make a full single-color texture.

Args:
color:
uint8 RGB color (an alpha channel, if present, is ignored)
texHeight,texWidth:
Height and width of texture
save:
None, optional
File path to save texture to.
If not None: save texture in supplied filepath.
Returns:
text:
numpy ndarray with RGB texture.
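
Example (a minimal sketch; the save path is a placeholder):

    import numpy as np
    from luxpy.toolboxes import stereoscopicviewer as sv

    # 100 x 100 uniform red texture, saved to file and returned as a numpy ndarray:
    tex = sv.makeColorTex(np.array([255, 0, 0], dtype=np.uint8),
                          texHeight=100, texWidth=100,
                          save='./textures/red_full.png')
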
luxpy.toolboxes.stereoscopicviewer.split_SingleSphericalTex(file, left_layout_pos='bottom')[source]

Split an image into left-eye and right-eye sub-images

Args:
file:
Image file path
left_layout_pos:
Position of left eye sub-image in image specified in file.
options: ‘bottom’, ‘top’, ‘left’, ‘right’, None
If None: there is no left and right subimage in
the image specified in filePath -> don’t split
Returns:
file_L, file_R:
filepaths to left and right eye sub-images
(each indicated respectively by ‘_L’, ‘_R’ appended to the filename.)
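
Example (a minimal sketch; the file name is a placeholder):

    # split a top/bottom stacked equirectangular image into left- and right-eye textures:
    file_L, file_R = sv.split_SingleSphericalTex('./textures/stim_equirect.png',
                                                 left_layout_pos='bottom')
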
class luxpy.toolboxes.stereoscopicviewer.Shader(resources, assetPath='core/shader/pbr.hps')[source]
class luxpy.toolboxes.stereoscopicviewer.Scene(canvasColorI=[0, 0, 0, 255], ambientEnvColorI=[0, 0, 0, 0])[source]
class luxpy.toolboxes.stereoscopicviewer.Camera(scene, position=[0, 0, 0], rotation=[0, 0, 0], zNear=0.01, zFar=5000, fov=60)[source]
class luxpy.toolboxes.stereoscopicviewer.Material(shader_prgRef, resources, uSelfMapTexture=None, uSelfMapTextureListPreloaded=None, uBaseOpacityColor=[1.0, 1.0, 1.0, 1.0], uSelfColor=[1.0, 1.0, 1.0, 1.0], uOcclusionRoughnessMetalnessColor=[0.0, 0.0, 0.0, 1.0], blend_mode=5, faceculling=2)[source]
createMaterial(uSelfMapTexture=None, uBaseOpacityColor=None, uSelfColor=None, uOcclusionRoughnessMetalnessColor=None, blend_mode=None, faceculling=None)[source]

Create a Harfang material with specified color and texture properties.

Args:
ubc:
uBaseOpacityColor
orm:
uOcclusionRoughnessMetalnessColor
slf:
uSelfColor
tex:
uSelfMap texture (if not None: any color input is ignored !)
blend_mode:
hg.BM_Opaque, optional
Blend mode
faceculling:
hg.FC_CounterClockwise
Sets face culling (hg.FC_CounterClockwise, hg.FC_Clockwise, hg.FC_Disabled)
LoadTexturesFromFiles(texFileList, return_type=<class 'list'>)[source]

Load textures specified in texFileList (return_type is either a list or dict)

class luxpy.toolboxes.stereoscopicviewer.Screen(scene, shader_prgRef, resources, geometry='sphere', aspect_ratio=[19, 16], radius=4, subdiv_x=256, subdiv_y=256, uSelfMapTexture=None, uSelfMapTextureListPreloaded=None, uBaseOpacityColor=[1.0, 1.0, 1.0, 1.0], uSelfColor=[1.0, 1.0, 1.0, 1.0], uOcclusionRoughnessMetalnessColor=[0.0, 0.0, 0.0, 1.0], blend_mode=5, position=[0, 0, 0], rotation=[0, 0, 0])[source]
updateScreenMaterial(uSelfMapTexture=None, uSelfColor=None, uBaseOpacityColor=None, uOcclusionRoughnessMetalnessColor=None, blend_mode=None)[source]

Update Screen Material

Args:

uBaseOpacityColor:
None, optional
uBaseOpacityColor
uOcclusionRoughnessMetalnessColor:
None, optional
uOcclusionRoughnessMetalnessColor
uSelfColor:
None, optional
uSelfColor
uSelfMapTexture:
None, optional
uSelfMap texture (if not None: any color input is ignored !)
blend_mode:
None, optional
Blend mode
Note:
  • If None: defaults set at initialization are used.

updateScreenMaterialTexture(uSelfMapTexture=None, uSelfMapTextureListPreloaded=None)[source]

Update the texture of the Harfang material.

Args:
uSelfMapTexture:
New texture (string with filename)
uSelfMapTextureListPreloaded:
None, optional
List with preloaded textures (to speed up texture update as it doesn’t need to be read from file anymore while looping over frames)
class luxpy.toolboxes.stereoscopicviewer.Eye(eye, vrFlag=True, shader_assetPath='core/shader/pbr.hps', scene_canvasColorI=[0, 0, 0, 255], scene_ambientEnvColorI=[0, 0, 0, 0], cam_pos=[0, 0, 0], cam_rot=[0, 0, 0], cam_zNear=0.01, cam_zFar=100, cam_fov=60, screen_geometry='sphere', screen_aspectRatio=1, screen_radius=10, screen_subdiv_x=256, screen_subdiv_y=256, screen_uSelfMapTexture=None, screen_uSelfMapTextureListPreloaded=None, screen_uBaseOpacityColor=[1.0, 1.0, 1.0, 1.0], screen_uSelfColor=[1.0, 1.0, 1.0, 0], screen_uOcclusionRoughnessMetalnessColor=[0.5, 0.0, 0.0, 1.0], screen_blend_mode=5, screen_pos=[0, 0, 0], screen_rot=[0, 0, 0])[source]
updateScreenMaterial(uSelfMapTexture=None, uBaseOpacityColor=None, uSelfColor=None, uOcclusionRoughnessMetalnessColor=None, blend_mode=None)[source]

Update Screen Material (see Screen.updateScreenMaterial.__doc__)

updateScreenMaterialTexture(uSelfMapTexture=None, uSelfMapTextureListPreloaded=None)[source]

Update Screen MaterialTexture (see Screen.updateScreenMaterialTexture.__doc__)

SceneForwardPipelinePassViewId_PrepareSceneForwardPipelineCommonRenderData(vid=0)[source]
PrepareSceneForwardPipelineViewDependentRenderData_SubmitSceneToForwardPipeline(vs, vr_eye_rect, isMainScreen=False)[source]
DestroyForwardPipeline()[source]
class luxpy.toolboxes.stereoscopicviewer.HmdStereoViewer(vrFlag=False, vsync=True, multisample=4, cam_fov=60, windowWidth=800, windowHeight=600, windowTitle='Harfang3d - Stereoscopic Viewer', mainScreenIdx=0, screen_geometry='sphere', screen_aspectRatio=[1, 1], screen_radius=10, screen_subdiv_x=256, screen_subdiv_y=256, equiRectImageLeftPos='bottom', equiRectImageLeftIsRight=False, screen_uSelfMapTexture=[None], screen_uSelfMapTextureListPreloaded=[None], screen_uBaseOpacityColor=[[1.0, 1.0, 1.0, 1.0]], screen_uSelfColor=[[1.0, 1.0, 1.0, 1.0]], screen_uOcclusionRoughnessMetalnessColor=[[0.0, 0.0, 0.0, 1.0]], screen_blend_mode=5, screen_position=[0, 0, 0], screen_rotation=[0, 0, 0], pipeFcns=None)[source]
set_texture(screen_uSelfMapTexture, equiRectImageLeftPos=None, equiRectImageLeftIsRight=None, screen_uSelfMapTextureListPreloaded=None)[source]
init_main()[source]

Initialize Input and Window, add folder with compiled assets

shutdown()[source]

Shut down the pipelines for the left and right eyes, shut down the renderer and destroy the window

updateScreenMaterial(uSelfMapTexture=None, equiRectImageLeftIsRight=None, equiRectImageLeftPos=None, uBaseOpacityColor=None, uSelfColor=None, uOcclusionRoughnessMetalnessColor=None, blend_mode=None)[source]

Update Screen Material

Args:
uSelfMapTexture:
None, optional
uSelfMap texture (if not None: any color input is ignored !)
equiRectImageLeftPos:
‘bottom’, optional
Specifier for where in the texture image the left sub-image is located.
options: ‘bottom’, ‘top’, ‘left’, ‘right’, None
If None: there are no separate left/right sub-images in the texture image file.
equiRectImageLeftIsRight:
False, optional
If True: the image for the left and right eye is the same.
uBaseOpacityColor:
None, optional
uBaseOpacityColor
uOcclusionRoughnessMetalnessColor:
None, optional
uOcclusionRoughnessMetalnessColor
uSelfColor:
None, optional
uSelfColor
blend_mode:
None, optional
Blend mode
Note:
  • If None: defaults set at initialization are used.

updateScreenMaterialTexture(uSelfMapTexture=None, equiRectImageLeftIsRight=None, equiRectImageLeftPos=None, uSelfMapTextureListPreloaded=None)[source]

Update the texture of the Harfang material.

Args:
uSelfMapTexture:
New texture (string with filename)
equiRectImageLeftPos:
‘bottom’, optional
Specifier for where in the texture image the left sub-image is located.
options: ‘bottom’, ‘top’, ‘left’, ‘right’, None
If None: there are no separate left/right sub-images in the texture image file.
equiRectImageLeftIsRight:
False, optional
If True: the image for the left and right eye is the same.
uSelfMapTextureListPreloaded:
None, optional
List with preloaded textures (to speed up texture update as it doesn’t need to be read from file anymore while looping over frames)
resetFrameNumber()[source]

Reset the frame number

getFrameNumber()[source]

Get the current frame number

display()[source]

Display the texture (first one from list, use run() to loop through all of them)

run(pipeFcns=None, pipeFcnsUpdate=None, only_once=False, u_delay=None, a_delay=None, autoShutdown=True)[source]

Run through all textures specified at initialization (and perform some action).

Args:
pipeFcns:
None, optional
list of piped functions, one executed after the other
If None: use the defaults. This will cause all textures
specified at initialization to be shown one after the other, with
delay times set by :u_delay: and :a_delay:.
If not None: use this set of user-defined pipeFcns (see code for example use)
pipeFcnsUpdate:
None, optional
Use this list or dictionary to update the pipeFcns specified by :pipeFcns:
This exists to keep e.g. the defaults but only change the ‘action’ part, e.g. to
do a measurement.
only_once:
False, optional
If True: loop through the set of textures once and then stop and shutdown.
u_delay:
None, optional
Delay in seconds for the update function in the pipeFcns.
This delays the initialization of the action function after
an update of the texture (e.g. to give some time to display the update on the HMD).
If None: use whatever is set in the (default) pipeFcns update function.
Else, override the delay if the update function has such a kwarg!
a_delay:
None, optional
Delay in seconds for the action function in the pipeFcns.
This delays the update to the next texture after the action
has been started (e.g. to simulate some action duration)
If None: use whatever is set in the (default) pipeFcns action function.
Else, override the delay if the action function has such a kwarg!
frame()[source]

Run everything required to update a frame

generate_defaultPipeFcns(pipeFcnDef=None)[source]

Generate default pipeline functions (if pipeFcnDef not None: use these)
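
Example (a minimal sketch of a simple display loop; the texture path is a placeholder
and vrFlag=False is assumed to run the viewer without an HMD):

    from luxpy.toolboxes import stereoscopicviewer as sv

    viewer = sv.HmdStereoViewer(vrFlag=False,
                                screen_geometry='sphere',
                                screen_uSelfMapTexture=['./textures/stim_equirect.png'],
                                equiRectImageLeftPos='bottom')
    viewer.run(only_once=True)    # loop through all textures once, then shut down
    # viewer.display()            # alternatively: show only the first texture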

luxpy.toolboxes.stereoscopicviewer.generate_stimulus_tex_list(stimulus_list=None, equiRectImageLeftIsRight=False, equiRectImageLeftPos='bottom', rgba_save_folder=None)[source]

Generate a list of textures.

Args:

stimulus_list:
None or str or list, optional
If None: generate a preset list of rgb colors: np.array([[1,0,0,1],[0,1,0,1],[0,0,1,1],[1,1,0,1],[1,0,1,1],[0,1,1,1]])*255
If str:
- filename of texture
- or, filename of .iml file with a list of filenames to textures
(the first line of the file should be: “path” followed by the path to the images in the file list)
If list:
- list of filenames to image textures.
(if not None: any color input is ignored !)
If ndarray with rgba stimuli:
- (equiRectImageLeftIsRight, equiRectImageLeftPos) will be updated to (True, None)
- texture files will be generated in the folder specified by rgba_save_folder
rgba_save_folder:
Folder to save the generated full single-color textures in when stimulus_list is an ndarray or None.
Returns:
stimulus_list:
list of stimuli file textures
(equiRectImageLeftIsRight, equiRectImageLeftPos):
  • equiRectImageLeftIsRight: bool (left image = right image)

  • equiRectImageLeftPos: string or None
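
Example (a minimal sketch; the save folder is a placeholder and the nesting of the
return values follows the Returns description above):

    import numpy as np
    from luxpy.toolboxes import stereoscopicviewer as sv

    rgba = np.array([[255, 0, 0, 255], [0, 255, 0, 255], [0, 0, 255, 255]])
    stim_list, (leftIsRight, leftPos) = sv.generate_stimulus_tex_list(
        stimulus_list=rgba, rgba_save_folder='./textures/rgba')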

luxpy.toolboxes.stereoscopicviewer.generate_rgba_texs_iml(rgb, rgba_save_folder)[source]

Generate rgba texture images, save them in a folder and return a list of texFiles and a .iml file with the paths to the texFiles

luxpy.toolboxes.stereoscopicviewer.get_rgbFromTexPaths(rgbatexFiles)[source]

Get rgb values read from the filenames of the tex-files

luxpy.toolboxes.stereoscopicviewer.getRectMask(roi, shape)[source]

Get a boolean rectangular mask with mask-area determined by the (row,col) coordinates of the top-left & bottom-right corners of the ROI

luxpy.toolboxes.stereoscopicviewer.getRoiImage(img, roi)[source]
luxpy.toolboxes.stereoscopicviewer.get_xyz_from_xyzmap_roi(xyzmap, roi)[source]

Get xyz values of Region-Of-Interest in XYZ-map
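
Example (a minimal sketch; the ROI layout ((row0, col0), (row1, col1)) and the stand-in
XYZ map are assumptions for illustration):

    import numpy as np
    from luxpy.toolboxes import stereoscopicviewer as sv

    xyz_map = np.zeros((600, 800, 3))   # stand-in for an XYZ map, e.g. from pcf_to_xyz()
    roi = ((100, 100), (300, 400))      # (row, col) of the top-left & bottom-right corners
    mask = sv.getRectMask(roi, xyz_map.shape[:2])
    xyz_roi = sv.get_xyz_from_xyzmap_roi(xyz_map, roi)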

luxpy.toolboxes.stereoscopicviewer.get_rgb_from_rgbtexpath(path)[source]

Get rgb values from filename