4.1. Software-related issues
Many of the available EEG and MEG systems come with analysis software packages, with varying levels of detail in the descriptions of how the different preprocessing tools are implemented. In addition, several freely available software packages that run on MATLAB/Python/R platforms, commercial data analysis packages, and custom-written software offer alternative implementations of data analysis tools. The software used for preprocessing and subsequent analysis must be indicated (including the version). In-house software should be described in explicit detail with reference to peer-reviewed or pre-print materials, and its source code should be publicly released with access links provided (e.g., GitHub or another readily accessible internet-based location).
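Where scripted pipelines are used, the software environment can be captured programmatically and archived with the analysis outputs. Below is a minimal sketch, assuming an MNE-Python-based workflow; the output file name is an arbitrary illustrative choice.

```python
# Capture the exact software versions used, for reporting and archiving.
import sys
import mne

with open("analysis_environment.txt", "w") as f:  # illustrative file name
    f.write(f"Python {sys.version}\n")
    f.write(f"MNE-Python {mne.__version__}\n")
    mne.sys_info(fid=f)  # versions of MNE and its core dependencies
```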
4.2. Defining workflows
Preprocessing is a crucial step in MEEG signal analysis, as the data are typically distorted by various factors. The sequence of steps in the preprocessing pipeline, and their order, influences the data used for subsequent analysis. The workflow therefore has to be described step-by-step, with a level of detail that would allow another researcher to reproduce it exactly; this is essential because even small changes in workflows can lead to large changes in the results (Robbins et al., 2020). For most studies, recommended steps after a general visual data inspection include (a code sketch of such a pipeline follows this list):
1) Identification and removal of electrodes/sensors with poor signal quality, i.e., identification of bad channels. It is essential to clearly describe the methodology and the criteria used, particularly if interpolation is used.
2) Artifact identification and removal. State the method and criteria used to identify artifacts. If a tool is used to automate this step, details of its implementation and the parameters used should be provided.
3) Detrending (when and if appropriate).
4) Downsampling (if performed).
5) Digital low- and high-pass filtering, with filter-type characteristics (IIR/FIR, type of filter [e.g., Butterworth, Chebyshev, etc.], cut-off frequency, roll-off/order, causal, zero-phase, etc.).
6) Data segmentation (if performed).
7) Additional identification/elimination of physiological artifacts (blinks, cardiac activity, muscle activity, etc.).
8) Baseline correction (when, and if, appropriate).
9) Re-referencing for EEG (e.g., earlobe/mastoid reference, common-average reference, bipolar) and expression of the data in another form (e.g., surface Laplacian; when and if desired).
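A minimal sketch of such a pipeline in MNE-Python is shown below; the file name, bad-channel label, event handling, and all parameter values are illustrative assumptions that would have to be motivated and reported in an actual study (step numbers refer to the list above).

```python
import mne

raw = mne.io.read_raw_fif("sub-01_task-visual_raw.fif", preload=True)

# 1) Mark bad channels (here hard-coded after visual inspection) and
#    interpolate them (spherical splines for EEG, by default).
raw.info["bads"] = ["EEG 053"]
raw.interpolate_bads(reset_bads=True)

# 3)-5) High-pass (removing slow drifts) and low-pass filter with the
#       default zero-phase FIR design, then downsample; filtering comes
#       before downsampling to avoid aliasing.
raw.filter(l_freq=0.1, h_freq=40.0)
raw.resample(250)

# 9) Re-reference the EEG to the common average.
raw.set_eeg_reference("average")

# 6) + 8) Segment around events (assuming a stimulus trigger channel is
#         present) and apply a pre-stimulus baseline correction.
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)
```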
The steps and sequence described above are appropriate for most basic analyses of data. That said, for specific analyses, or due to specific data characteristics, the order of processing may vary for scientific reasons. For example, data segmentation could occur at different points in the pipeline, depending in part on the specific artifact removal methods used. Note, however, that filtering should be performed before data segmentation to avoid edge effects; alternatively, sufficient data padding should be used. Data re-referencing could also in principle be performed at various points in the pipeline, but it is important to note that re-referencing can introduce a spatial spread of artifacts. The committee recognizes that investigators may require a pipeline in which the order of steps is chosen for specific reasons, and hence we are not prescriptive about a particular order of data analysis. That said, for each study, the order of the steps in the preprocessing pipeline should be motivated and made explicit, so that other investigators can replicate the study.
Visual inspection of the spatiotemporal structure of the signals after each step is recommended and, if needed, remaining segments of poor data quality should be marked and excluded from further analysis. When such epochs are additionally rejected, a record should be provided such that the same analysis can be reproduced from the raw data. Ideally, rejected segments should be stored in samples relative to the onset of the data record, to avoid the ambiguity that can arise when reporting more or less arbitrary ordinal epoch numbers. During preprocessing, topographic maps of the distribution of the means and variances of scalp voltages (for EEG) and magnetic fields (for MEG) can serve as an additional tool for spotting channels with poor data quality that might escape detection in waveform displays (Michel et al., 2009).
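As one possible (non-prescriptive) way of keeping such a record, bad segments can be stored as annotations whose onsets are expressed in seconds relative to the start of the recording, continuing the MNE-Python sketch above; the onsets and durations shown are illustrative.

```python
import mne

# Two rejected segments, defined relative to the onset of the data record.
bad_annots = mne.Annotations(onset=[12.5, 104.0],      # seconds
                             duration=[2.0, 1.5],      # seconds
                             description=["bad_segment"] * 2)
raw.set_annotations(raw.annotations + bad_annots)

# Save the record so the same rejection can be reproduced from raw data.
raw.annotations.save("sub-01_bad-segments.csv", overwrite=True)
```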
4.3. Artifacts and filtering
Artifacts from many different sources can contaminate MEEG data and must be identified and/or removed. Artifacts can be of non-physiological (bad electrode contact, power line noise, flat MEG or EEG channel etc.) or physiological (pulse, muscle activity, sweating, movement, ocular blinks etc.) origin. The data should first be visually inspected to assess what types of artifact are actually present in the data. This evaluation should not be biased by the knowledge of the experimental conditions. Subsequently, established artifact identification/removal pipelines can be run, or an alternative motivated cleaning procedure can be implemented. Artifacts can be dealt with in different ways, from simply removing artifact-contaminated segments or channels from the data, to separating signal from noise using e.g. linear projection/spatial filtering techniques.
If automatic artifact detection methods are used, they should be followed up by visual inspection of the data. Any operations performed on the data (see the workflow in Section 4.2) should therefore be described, specifying the parameters of the algorithm used. It is recommended to describe in detail the type of detrending performed and the algorithm order (e.g., linear 1st order, piecewise, etc.). When automatic artifact rejection/correction is performed, report which method was used and the range of its parameters (e.g., epochs rejected when the EEG range exceeded 75 µV, or epochs rejected based on kurtosis more than 3 standard deviations from the mean). Similarly, for channel interpolation it is essential to specify the interpolation method and additional parameters (e.g., trilinear, spline order). When independent component analysis (ICA; Brown et al., 2001; Jung et al., 2001; Onton et al., 2006) is used, describe the algorithm and parameters used, including the number of ICs that were obtained. If artifacts are rejected using ICA or other signal-space separation methods, it is important to report how these were identified and how back-projection was performed. For instance, ICA can be performed in combination with a high-pass filter, and the back-projected data without the artifact component can then be obtained with or without that filter. This level of detail is necessary if Readers are to reproduce the method used. It is worthwhile to also consider including topographies of components in the Supplementary Materials section of manuscripts (when available). If interactive artifact rejection procedures are used, it is essential to describe what types of features in the MEEG signal were identified and to define the criteria used to reject segments of data. This also allows the Reader to reproduce the results, and to compare results between studies (see above on reporting visually removed trials or epochs, for instance). Once artifacts have been removed, the average number of remaining trials per condition should be reported.
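As an illustration of the level of detail required, the sketch below performs ICA-based ocular-artifact removal in MNE-Python; the algorithm, number of components, the 1 Hz high-pass filter used for fitting, and the frontal channel used to identify blink-related components are all assumptions that would need to be adapted and reported.

```python
from mne.preprocessing import ICA

# Fit ICA on a 1 Hz high-passed copy (ICA is sensitive to slow drifts).
raw_for_ica = raw.copy().filter(l_freq=1.0, h_freq=None)
ica = ICA(n_components=20, method="fastica", random_state=97)
ica.fit(raw_for_ica)

# Identify ocular components against a frontal channel assumed to capture
# blinks, then back-project onto the original (0.1 Hz high-passed) data.
eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name="Fp1")
ica.exclude = eog_inds
ica.apply(raw)  # removes the excluded components from `raw` in place
```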
In addition to removing artifact-contaminated segments or using ICA as a popular linear projection technique, MEG allows for the application of specialized linear projection techniques, which in some situations can be used in isolation. For example, signal-space projection methods (SSP, Uusitalo & Ilmoniemi, 1997) use “empty room” measurements to estimate the topographic properties of the sensor noise and project it out from recordings containing brain activity. Related tools with a similar purpose include signal space separation (SSS) methods and their temporally extended variants (tSSS, Taulu et al., 2004; Taulu & Simola, 2006) that rely on the geometric separation of brain activity from noise signals in MEG data. SSS methods have been recommended as being superior to SSP (Haumann et al., 2016). The ordering of preprocessing steps for cleaning MEG data is particularly important, due to potential data transformation – for some caveats see Gross et al., 2013.
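The sketch below illustrates both approaches in MNE-Python, assuming MEGIN/Elekta-style data and an available empty-room recording; the numbers of projection vectors and the tSSS buffer length are illustrative choices, not recommendations.

```python
import mne

# SSP: estimate the noise subspace from an empty-room measurement and
# project it out of a copy of the task data.
empty_room = mne.io.read_raw_fif("empty_room_raw.fif", preload=True)
projs = mne.compute_proj_raw(empty_room, n_mag=3, n_grad=3)
raw_ssp = raw.copy().add_proj(projs).apply_proj()

# tSSS: spatio-temporal signal space separation (Taulu & Simola, 2006),
# applied as an alternative to the SSP branch above.
raw_tsss = mne.preprocessing.maxwell_filter(raw, st_duration=10.0)
```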
For both MEG and EEG data, particular attention must be paid to describing temporal filtering, both at data acquisition and during post-processing, as this can have dramatic consequences for estimated time-courses and phases (Rousselet, 2012; Widmann et al., 2015); the overall scalp topography is unaffected (although possibly shifted in time), but the topography of non-stationary dynamic signals (e.g., components) can be affected. Some investigators have advocated the use of an acquisition sampling rate that is 4 times the intended cut-off frequency of the low-pass filter (Luck et al., 2014, and the latest IFCN guidelines). That said, the roll-off rate/slope of the filter should also be taken into consideration, because some signal will still be present above the filter cut-off frequency. The type and parameters of any applied post-hoc filter and re-computed references (for EEG, EOG and EMG) therefore have to be specified: filter type (high-pass, low-pass, band-pass, band-stop; FIR: e.g., windowed sinc, incl. window type and parameters, Parks-McClellan, etc.; IIR: e.g., Butterworth, Chebyshev, etc.), cutoff frequency (including its definition: e.g., -3 dB/half-energy, -6 dB/half-amplitude, etc.), filter order (or length), roll-off or transition bandwidth, passband ripple and stopband attenuation, filter delay (zero-phase, linear-phase, non-linear phase) and causality, and direction of computation (one-pass forward/reverse, or two-pass forward and reverse). In the case of two-pass filtering, it must be specified whether the reported cutoff frequencies and filter order apply to each single pass or to the final two-pass filter.
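As a worked example of such a description: the sketch below designs a 4th-order Butterworth low-pass with a -3 dB cutoff at 40 Hz (per pass) in SciPy and applies it two-pass, forward and reverse, yielding a zero-phase filter whose effective order is 8 and whose attenuation at 40 Hz becomes -6 dB; the data and all values are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0  # sampling rate (Hz)

# 4th-order Butterworth low-pass, -3 dB at 40 Hz for a single pass.
sos = butter(N=4, Wn=40.0, btype="lowpass", fs=fs, output="sos")

data = np.random.randn(32, 10 * int(fs))    # 32 channels, 10 s of data
# Two-pass (forward and reverse) application: zero phase, doubled order,
# and -6 dB at the nominal cutoff; this is what must be reported.
filtered = sosfiltfilt(sos, data, axis=-1)
```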
Data preprocessing also forms an essential part of multivariate techniques, and can dramatically affect decoding performance (Guggenmos et al., 2018). We recommend carefully describing the method used, in particular whether noise normalization is performed channel-wise (univariate normalization) or for all channels together (multivariate normalization, or whitening). For the latter, the covariance estimation procedure must be specified (based on the baseline, on epochs, or for each time point), as its strong impact on results (Engemann & Gramfort, 2015) can hinder any attempt to reproduce the analyses.
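A minimal NumPy sketch of multivariate noise normalization is given below, with the noise covariance estimated from the pooled baseline period and stabilized by shrinkage; the array shapes and the shrinkage value are illustrative assumptions, i.e., a sketch of the general technique rather than a definitive implementation of any published method.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# epochs_data: (n_epochs, n_channels, n_times); baseline = first 100 samples.
epochs_data = np.random.randn(200, 64, 600)
baseline = epochs_data[:, :, :100]

# Pool baseline samples across epochs and estimate a shrunk covariance
# (the sample covariance is unstable when channels rival sample counts).
X = baseline.transpose(1, 0, 2).reshape(64, -1)        # channels x samples
cov = np.cov(X)
shrinkage = 0.1                                        # illustrative value
cov = (1 - shrinkage) * cov + shrinkage * (np.trace(cov) / 64) * np.eye(64)

# Whitening operator: inverse matrix square root of the noise covariance,
# applied to every epoch and time point.
W = fractional_matrix_power(cov, -0.5)
whitened = np.einsum("ij,ejt->eit", W, epochs_data)
```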
4.4. Re-referencing
EEG is a differential measure and, in non-clinical settings, is usually recorded relative to a fixed reference (in contrast to clinical practice, which usually uses bipolar montages). While EEG is always recorded relative to some reference, it can later be re-referenced by subtracting the values of another channel, or a weighted sum of channels, from all channels. The need for re-referencing depends on the goals of the analysis and the EEG measures used (e.g., common average reference, see below), and re-referencing can be beneficial for the evaluation of connectivity and for source modelling. Note, however, that independently of the actual re-referencing scheme, sensor-level interpretation of connectivity is invariably confounded by spatial leakage of source signals (Schoffelen & Gross, 2009). Re-referencing does not change the contours of the overall scalp topography, since relative amplitude differences are maintained. It can, however, cause issues when working on single channels or clusters, because amplitudes do change locally with referencing (Hari & Puce, 2017). Specifically, the shape of the recorded waveforms at specific electrodes can be altered, and this will also affect the degree to which waveforms are distorted by artifacts. Hence, when comparing across experiments, the references used should be taken into account, and if unusual, the reference choice should be justified. For EEG, the channel(s) or method used for re-referencing must be specified. MEG is essentially reference free, but some systems may allow for "re-referencing" of the signals recorded close to the brain, using signals recorded at a set of reference coils far away from the brain. If these types of balancing techniques are used, they should be adequately described.
Re-referencing relative to the average of all channels (common average reference, CAR) is, in current practice, the most common first step for high-density recordings. The main assumption behind the CAR is that the summed potentials from electrodes spaced evenly across the entire head should be zero (Bertrand et al., 1985; Yao, 2017). Although it is generally accepted that this is a good approximation for EEG data sets of 128 sensors or more (Srinivasan et al., 1998; Nunez & Srinivasan, 2006), the effect of re-referencing to the CAR has been found to bear no close relation to electrode density: the sum of the potentials is mainly affected by the coverage area and the orientation of the activated neural sources (Hu et al., 2018a). For low-density recordings and ROI-based analyses in sensor space, there is a serious risk of violating the assumptions of the average reference and the possibility of introducing shifts in potentials (Hari & Puce, 2017); thus the CAR should be avoided in low-density recordings (<128 channels).
An alternative to the CAR approach is the "infinite reference", also known as the Reference Electrode Standardization Technique (REST, and regularized REST; Yao, 2001). Both the CAR and REST have been shown to be the extremes of a family of Bayesian reference estimators (Hu et al., 2018b): REST uses the prior that EEG signals are correlated across electrodes due to volume conduction, while CAR uses the prior that EEG signals are independent across electrodes (for reviews see Yao et al., 2019; Hu et al., 2019). If the focus of the data analysis is on source-space inference (see Section 4.6), re-referencing is, in theory, not necessary, but may be useful for comparisons with the existing literature. Of note, any linear transform applied to the data (e.g., CAR) should also be applied to the forward matrix used for source-space analysis. Such important details are generally taken care of by the software tools in the field (and some require data to be in CAR form), but it is worthwhile ensuring that this is done. Finally, it should also be noted that there are so-called "reference-free" methods, the most common being the current source density (CSD) transformation, which usually relies on the spatial Laplacian of the scalp potential, i.e., the second spatial derivative of the scalp voltage topography (Tenke & Kayser, 2005). Such techniques attempt to compensate, in EEG, for the signal smoothing due to the low electrical conductivity of the scalp and skull. When this is used, the software and parameter settings (interpolation method at the channel level and algorithm of the transform) must be specified.
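For illustration, the sketch below applies two of the options discussed above, the CAR and a spherical-spline CSD transform, in MNE-Python; the spline stiffness and regularization shown are the package defaults and are exactly the parameters that should be reported (a montage with electrode positions is assumed to be set).

```python
import mne

# Common average reference on a copy of the data.
raw_car = raw.copy().set_eeg_reference("average")

# CSD via spherical splines; report the stiffness and regularization
# alongside the channel-level interpolation method.
raw_csd = mne.preprocessing.compute_current_source_density(
    raw.copy(), stiffness=4, lambda2=1e-5)
```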
4.5. Spectral and time-frequency analysis
A common approach to the analysis of MEEG data is to examine the data in terms of its frequency content; these analyses are applicable to both task-related and resting-state designs. One important caveat for these types of analyses is that the highest frequencies that could occur in the data must be considered first. The selected data acquisition rate must be at least 2 times (Nyquist theorem) the highest frequency in the data, but is often set higher because of the filter roll-off (see Section 4.3), underscoring the importance of planning all data analyses prior to data acquisition, ideally during the design of the study. Similarly, the lowest frequencies of interest should also be considered, in which case an adequate pre-stimulus baseline should be specified for evoked MEEG data, i.e., the baseline duration should be equal to at least 3 cycles of the slowest frequency to be examined (Cohen, 2014).
In task-related designs, MEEG activity can be classified as evoked (i.e., be phase-locked to task events/stimulus presentation) or induced (i.e., related to the event, but not exactly phase-locked to it). Hence, it is important to specify what type of activity is being studied. The domain in which the analysis proceeds (time and frequency or frequency alone) should be specified, as should the spectral decomposition method used (see below), and whether the data are expressed in sensor or source space. These methods can be the precursor to the assessment of functional connectivity (see Section 4.6).
The spectral decomposition algorithm, as well as the parameters used, should be specified in sufficient detail, since these crucially affect the outcome. Depending on the decomposition method used (e.g., wavelet convolution, Fourier decomposition, Hilbert transformation of bandpass-filtered signals, or parametric spectral estimation), one should describe the type of wavelet (including the tuning parameters), the exact frequency or time-frequency parameters (frequency and time resolutions), exact frequency bands, number of data points, zero padding, windowing (e.g., a Hann or Hanning window), and spectral smoothing (Cohen, 2018). It is relevant to note that the frequency resolution is defined as the minimum frequency interval by which two distinct underlying oscillatory components need to be separated in order to be dissociated in the analysis (Bloomfield, 2004; Boashash, 2003). This should not be mistaken for the increments at which the frequency values are reported (e.g., when smoothing or oversampling is used in the analyses). When using overlapping windows (e.g., in Welch's method) or multitaper windows for robust estimation, the resulting spectral smoothing may cause closely spaced narrow frequency bands to blend; this should be carefully considered and reported.
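The sketch below illustrates, using SciPy's implementation of Welch's method, the difference between the true frequency resolution (set here by the 2 s segment length, i.e., 0.5 Hz) and the finer frequency increments obtained by zero-padding, which interpolate the spectrum without resolving closer components; the signal and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import welch

fs = 500.0
x = np.random.randn(int(60 * fs))   # 60 s single-channel signal

# 2 s Hann-windowed segments with 50% overlap: 0.5 Hz resolution.
f, pxx = welch(x, fs=fs, window="hann",
               nperseg=int(2 * fs), noverlap=int(fs))

# Same resolution, but reported on a 0.125 Hz grid via zero-padding.
f_pad, pxx_pad = welch(x, fs=fs, window="hann",
                       nperseg=int(2 * fs), noverlap=int(fs),
                       nfft=int(8 * fs))
```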
4.6. Source modelling
MEEG data are recorded from outside the head. Source modelling is an attempt to explain the spatio-temporal pattern of the recorded data in sensor space as resulting from the activity of specific neural sources within the brain (in source space), a process known as solving the inverse problem. Since there is no unique solution to the inverse problem (i.e. it is mathematically ill-posed), additional assumptions are needed to constrain the solution. Source modelling requires a forward model, which models the sensor level distribution of the EEG potential or MEG magnetic field for a (set of) known source(s), modelling the effect of the tissues in the head on the propagation of activity to MEEG sensors. Forward and inverse modelling require a volume conduction model of the head and a source model, both of which can crucially influence the accuracy and reliability of the results (Baillet et al., 2001; Michel & He, 2018). Practically, the forward model (or lead field matrix) describes the magnetic field or potential distributions in sensor space that result from a predefined set of (unit amplitude) sources. The sources are typically defined either in a volumetric grid, or on a cortically constrained sheet. Information from the forward model is then used to estimate the solution of the inverse problem, in which the measured MEEG signals are attributed to active sources within the brain. It is important to note that source modelling procedures essentially provide approximations of the inverse solution as solved under very specific assumptions or constraints.
In addition to the MEEG data itself, forward and inverse modelling requires a specification of the spatial locations of the sensors relative to the head (Section 3.2), a specification of the candidate source locations (the source model), and geometric data used as a volume conduction model of the head, e.g., a spherical head model or a more anatomically realistic model based on an individual anatomical MRI of the entire head (i.e., including the scalp and face). Note that this may have implications for subject privacy when sharing data (see Section 7.2). The procedure used to coregister the locations of measurement sensors and fiducials with the geometric data must be described (see Section 2.1 for definitions; Section 3.2 for sensor digitization methods). If using anatomical MRI data, it should be made clear whether a normalized anatomical MRI volume (such as the MNI152 template) or individual participant MRIs were used for data analysis. If individual MRIs were used, the data acquisition parameters should be described.
It is essential that all details of the head model and the source model are given. The numerical method used for the forward model (e.g., boundary element modelling (BEM), finite element modelling (FEM)) must be reported, and the values of electrical conductivity of the different tissues used in the calculations must be specified. This is less of a problem for MEG, where magnetic fields are not greatly distorted by passing through different tissue types (Baillet, 2017). The procedure for the segmentation of the anatomical MRI into the different tissue types should be described. For the source model, the number of dipole locations should be reported, as well as their average distance, and it should be specified how the source model was constructed, i.e., whether it describes a volumetric 3D grid or a cortically constrained mesh. When using cortically constrained (surface-based or volumetric) source models, these should ideally be based on an individual MRI of the participant's head, especially in clinical studies where brain lesions or malformations may be involved, or in pediatric studies where the status of the fontanelles can vary across individuals of the same young age. That said, it has been argued that in certain clinical settings approximate head models might be adequate, although their limitations should be explicitly acknowledged (Valdés-Hernández et al., 2009). The source localization method (e.g., equivalent current dipole fitting, distributed model, dipole scanning) and the software and its version (e.g., BESA, Brainstorm (Tadel et al., 2011), Fieldtrip (Oostenveld et al., 2011), EEGLAB (Delorme & Makeig, 2004), LORETA, MNE (Gramfort et al., 2013), Nutmeg (Dalal et al., 2004), SPM (Litvak et al., 2011), etc.) must be reported, with inclusion of the parameters used (e.g., the regularization parameter) and an appropriate reference to the technical paper describing the method in detail. Finally, it should be noted that the original mixing from the neural sources to the scalp/sensor signals cannot be completely undone, even with perfect source reconstruction, and this is a specifically important confounder for connectivity analyses (Schoffelen & Gross, 2009; Palva et al., 2018; Pascual-Marqui et al., 2018).
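A condensed MNE-Python sketch of such a forward/inverse pipeline is given below, making explicit the quantities that the text says must be reported (tissue conductivities, source spacing, regularization); the subject and file names, and the pre-existing `evoked` and `noise_cov` objects, are assumed for illustration.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

subjects_dir = "/data/freesurfer"  # illustrative FreeSurfer directory

# Forward model: 3-layer BEM with the tissue conductivities (S/m) that
# must be reported, plus a cortically constrained source space.
model = mne.make_bem_model("sub-01", ico=4,
                           conductivity=(0.3, 0.006, 0.3),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
src = mne.setup_source_space("sub-01", spacing="oct6",
                             subjects_dir=subjects_dir)
fwd = mne.make_forward_solution(evoked.info, trans="sub-01-trans.fif",
                                src=src, bem=bem)

# Inverse solution: dSPM, with the regularization (lambda2 = 1/SNR^2)
# stated explicitly rather than left implicit.
inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
```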
4.7. Connectivity analysis
We refer here to connectivity analyses as any method that aims to detect the coupling between two or more channels or sources, and re-emphasise that the distinction between functional (correlational) and effective (causal) connectivity should be respected (Friston 1994). It is also important to report and justify the use of either sensor, or source space for the calculation of derived metrics of coupling (e.g., network measures such as centrality or complexity).
4.7.1. Making Networks
Networks are typically derived in one of two ways: data-driven (e.g., clustering of correlations, ICA) or model-driven. For MEEG, temporal ICA is typically used to partition the data into separate networks of maximally independent temporal dynamics (Onton & Makeig, 2006; Eichele et al., 2011) from which metrics are derived. For anatomically/model-driven networks, particular attention should be given to the parcellation scheme, explaining how this was performed (see e.g., Douw et al., 2017). Recent results have also shown strong differences for connectivity computed in subject space versus template space (Farahibozorg et al., 2018; Mahjoory et al., 2017), and such choices must be explained.
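As an illustration of an anatomically driven approach, the sketch below reduces source estimates to one time course per parcel of the Desikan-Killiany ("aparc") atlas in MNE-Python; the atlas, the extraction mode, and the pre-existing `stcs`, `src`, and `subjects_dir` objects are illustrative assumptions to be reported when used.

```python
import mne

# One representative time course per anatomical parcel; both the atlas
# and the extraction mode ("mean_flip") are reportable choices.
labels = mne.read_labels_from_annot("sub-01", parc="aparc",
                                    subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(stcs, labels, src,
                                         mode="mean_flip")
```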
4.7.2. Sensor vs. Source connectivity
While the committee agrees that statistical metrics of dependency can be obtained at the channel level, it should be clear that these are not per se measures of neural connectivity (Haufe et al., 2012). The latter can only be obtained by an inferential process that compensates for volume conduction and for spurious connections due to unobserved common sources or cascade effects. In spite of that, dependency measures can be useful, e.g., for biomarking. Connectivity from ICA falls in between these two approaches, as ICA acts as a spatial filter separating out neural sources (see e.g., Brookes et al., 2012), but it does not reconstruct them per se, nor does it account for volume conduction, common sources, etc. The possible insight into brain function derived from these measures should be critically discussed. This is particularly important since the interpretation of MEEG-based connectivity metrics may be confounded by aspects of the data that do not directly reflect true neural events (Schoffelen & Gross, 2009; Valdes Sosa et al., 2011). Inference about connectivity between neural masses can only be performed with dependency measures at the source level and correct inferential procedures. For potential issues in dealing with connectivity analyses across channels versus sources, see Lai et al., 2018.
4.7.3. Computing metrics
We refer the reader to recent general references on connectivity measures (Bastos & Schoffelen, 2016; O’Neill et al., 2018; He et al., 2019).
Special care must be taken when describing the metric used. Epoch length must be reported, as it greatly influences connectivity values, especially when comparing sensor versus source space (Fraschini et al., 2016), and if dynamic connectivity is computed, the measures must be described including their temporal parameters (window size, overlap, wavelet frequency, etc.; see Tewarie et al., 2019 for an overview). When computing measures of data-driven spectral coherence or synchrony (Halliday et al., 1995), the following aspects should be considered and reported: the exact formulation (or a reference), whether the measure has been debiased, and any subtraction or normalisation with respect to an experimental condition or a mathematical criterion. When using multivariate measures (either data-driven or model-based), such as partial coherence and multiple coherence, all of the variables used must be described. Importantly, it must be stated with respect to which variables the data are partialised, marginalised, conditioned, or orthogonalized (e.g., Brookes et al., 2012; Colclough et al., 2015). In the case of Auto-Regressive (AR)-based multivariate modelling (e.g., in the Partial Directed Coherence family of measures; Baccala & Sameshima, 2001), the exact model parameters (number of variables, data points and window lengths, as well as the estimation methods and fitting criteria) should be reported.
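For illustration, the sketch below computes magnitude-squared coherence between two (here synthetic) channels with SciPy, making the reportable parameters explicit; note that this estimator is positively biased when few segments are available, which is one reason epoch length and segment count must be reported.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0
x = np.random.randn(int(120 * fs))   # two synthetic 120 s channels
y = np.random.randn(int(120 * fs))

# Magnitude-squared coherence; 4 s Hann segments with 50% overlap give
# a 0.25 Hz resolution and 59 segments, all of which should be reported.
f, cxy = coherence(x, y, fs=fs, window="hann",
                   nperseg=int(4 * fs), noverlap=int(2 * fs))
```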
Table 3. Data pre-processing and processing checklist

Workflow
– Indicate in detail the exact order in which preprocessing steps took place.

Software
– Indicate which software and version(s) were used for preprocessing and processing, as well as the analysis platform.
– In-house code should be shared/made public.

Generic preprocessing
– Indicate any downsampling of the data.
– If electrodes/sensors were removed, indicate which identification method was used and which channels were removed; if missing-channel interpolation was performed, indicate which method.
– Specify the detrending method (typically polynomial order) used for baseline correction.
– Specify the noise normalization method (typically used in multivariate analyses).
– If data segmentation was performed, indicate the number of epochs per subject per condition.
– Indicate the spectral decomposition algorithm and parameters, and whether it was applied before/after segmentation.

Detection/rejection/correction of artifacts
– Indicate what types of artifact are present in the data.
– For automatic artifact detection, describe the algorithms used and their respective parameters (e.g., amplitude thresholds).
– For manual detection, indicate the criteria used with as much detail as needed for reproducibility.
– Indicate whether trials with artifacts were rejected or corrected. If using correction, indicate the method(s) and parameters.
– If trials/segments of data with artifacts were removed, indicate the average number of remaining trials per condition across participants (include the minimum and maximum number of trials across participants).
– For resting-state data, specify the length of the artifact-free data.

Correction of artifacts using BSS/ICA
– Indicate how many components were generated in total, what type of artifact was identified and how, and how many components were removed (on average across participants).
– Display example topographies of the ICs that were removed.

Filtering
– Filter type (high-pass, low-pass, band-pass, band-stop; FIR, IIR).
– Filter parameters: cutoff frequency (including its definition: e.g., -3 dB/half-energy, -6 dB/half-amplitude, etc.), filter order (or length), roll-off or transition bandwidth, passband ripple and stopband attenuation, filter delay (zero-phase, linear-phase, non-linear phase).
– Causality and direction of computation (one-pass forward/reverse, or two-pass forward and reverse).

Re-referencing (for EEG)
– Report the digital reference and how it was computed.
– Justify the choice of the re-referencing scheme.

Source modelling
– Method of co-registration of measurement sensors to the anatomical MRI scan of the participant's head or an MRI template (for EEG in particular)?
– Volume conductor model (e.g., BEM/FEM) and tissue conductivity values (for EEG); procedure for anatomical image segmentation?
– Source model details (e.g., dipole, distributed, dipole scanning; volumetric or surface-based), number of source points and their average distance?
– Report the parameters used for source estimation (i.e., regularization of the data covariance matrix; constraints used for the source model).

Connectivity
– Sensor or source space?
– Anatomical parcellation scheme (source space)?
– Detail the exact variables analysed (with respect to which variables the data were partialised, marginalised, or conditioned).
– For model-based approaches, indicate the model parameters.
– Specify the metrics of coupling.
