This blog provides:

A. information on the process for creating the COBIDAS-MEEG white paper (10.31219/osf.io/a8dhx) for the Organization for Human Brain Mapping [OHBM] in 2017-2018;

B. the contents of the COBIDAS-MEEG white paper itself, presented as a series of blog posts so that comments & feedback can be provided. Each section of the document has its own blog post. Some sections also end with a Table, meant as a checklist of items to include when writing up or submitting a manuscript. The sections are:

  1. The process
  2. Approach and Scope
  3. Experimental Design
  4. Data Acquisition
  5. Data Preprocessing
  6. Biophysical and Statistical Analyses
  7. Result reporting
  8. Replicability and Data sharing




Stats on COBIDAS MEEG blog at 12 noon 23 Nov 2018

We are delighted with the traffic we have received on this blog [which will be taking comments until the end of November – 1 more week or so to go… so if you have not given us feedback on the document, now would be a good time!]

The whole document in preprint form can also be viewed on: https://osf.io/a8dhx

We have had truly global participation in viewing this blog, as shown by the map & table below, with more than 1,000 unique viewers:



As you can see most of the traffic has come from the USA/Canada & Europe, but participation has been extensive.

Thank you for taking the time & trouble to look at the blog & for also providing feedback to us as well.

Aina & Cyril




June 2017: At the General Assembly [formerly known as the ‘Town Hall’ meeting] of the OHBM in 2017 in Vancouver, the issue of a COBIDAS white paper for EEG & MEG was raised, prompting the then OHBM Council Chair [Alan Evans] to kickstart the process by asking the questioner [Aina Puce] to ‘do something about it’. During the remaining time in Vancouver, Puce had discussions with a number of MEEGers, inviting Cyril Pernet to be a Co-Chair of a Committee that would draft a white paper on ‘Best Practices in Data Analysis & Sharing in Neuroimaging using MEEG’.

August 2017: Puce & Pernet put out a call to OHBM members to help draft the COBIDAS-MEEG white paper & 113 OHBM volunteers answered the call! Puce & Pernet also conducted fruitful discussions with the Executive of the International Federation of Clinical Neurophysiology [IFCN] about the proposed white paper. [OHBM members Pedro Valdes-Sosa & Aina Puce had served as members on IFCN committees producing guidelines on various aspects of performing MEG and EEG, so we wanted to ensure that there was a consensus across all of our fields about existing standards for MEEG studies.] Additionally, Cyril Pernet was co-ordinating the drafting of the BIDS-EEG standard & monitoring the work performed on the BIDS-MEG standard, again so that there would be consistency & parity across these various standards.

September 2017: Puce & Pernet select the COBIDAS-MEEG committee – a difficult task, as so many individuals with wonderful skills in EEG, MEG or both responded to the call. The final committee was chosen to balance varied expertise, age, gender & geography [nationality & current place of residence].

October 2017: Invitations to join the COBIDAS-MEEG committee are sent out & acceptances logged. The individuals who ultimately drafted & contributed to the COBIDAS-MEEG white paper were:

Marta Garrido

Alexandre Gramfort

Natasha Maurits

Christoph Michel

Elizabeth Pang

Riitta Salmelin

Jan Mathijs Schoffelen

Pedro A. Valdes-Sosa

Aina Puce & Cyril Pernet (Co-Chairs)

November-December 2017: Pernet delegates subgroups of committee members to write sections of the first draft.

January-March 2018: Puce & Pernet review & edit first draft. Committee members make further edits to document.

March-April 2018: The first draft is put out for comment to the ~100 OHBM volunteers, as well as the IFCN executive.

May 2018: Puce & Pernet edit document based on feedback; Pernet presents on COBIDAS-MEEG at the IFCN Annual Scientific Meeting in Washington DC.

June 2018: The completed first draft is sent to OHBM Council; Puce presents a progress report at the General Assembly of OHBM 2018 in Singapore.


July 2018: Puce presents on COBIDAS-MEEG process & content at the CuttingEEG meeting in Paris. The COBIDAS-MEEG Committee reviews the document once more.



August 2018-November 2018: The COBIDAS-MEEG white paper is put out for comment to the OHBM Membership via a WordPress blog; members are informed by the OHBM Secretariat on how to access the document. Relevant journal editors are also provided with a copy of the document with an invitation to comment.

? December 2018-January 2019 ?: Puce & Pernet incorporate feedback from the OHBM Membership to create the second draft of the white paper; the COBIDAS-MEEG committee makes edits; the second draft of the white paper is shared with the IFCN Executive, with an invitation for comments & endorsement.

? January 2019 ?: Submission of COBIDAS-MEEG pre-print; COBIDAS-MEEG made available on OHBM website; weigh options for submissions to neuroscience/neuroimaging journals.

? February 2019 ?: Submission to peer-reviewed journal

June 2019: Presentation of COBIDAS-MEEG progress report at General Assembly for OHBM 2019 in Rome.


At the beginning of August, a general email from the OHBM Secretariat was sent out letting OHBM members know that they can comment on the document. We have posted on Twitter as well.

1. Approach and Scope

Over the last decade or so, more and more discussion has been focused on concerns regarding reproducibility of scientific findings and a potential lack of transparency in data analysis – prompted in large part by the Open Science Collaboration (2015), which could only replicate 39 out of 100 previously published psychological studies. Since then, there has been an ongoing discussion about these issues in the wider scientific community, including the neuroimaging field. There has also been a push to implement the practice of ‘open science’, which among other things promotes: (1) transparency in reporting data acquisition and analysis parameters, (2) sharing analysis code and the data itself with other scientists, as well as (3) implementing an open peer review of scientific manuscripts. Within the Organization for Human Brain Mapping (OHBM) community, there have been discussions both at Council level as well as at the grassroots level regarding how the neuroimaging community can improve its standards for performing and reporting research studies. In June 2014, OHBM Council created a “Statement on Neuroimaging Research and Data Integrity”, and in a practical move created a Committee on Best Practices in Data Analysis and Sharing (COBIDAS). The COBIDAS committee’s brief was to create a white paper based on best practices in MRI-based data analysis and sharing in the neuroimaging community. The COBIDAS MRI report was completed and made available to the OHBM community on its website, as well as a preprint that was submitted in 2016 and published in 2017 (Nichols et al., 2016 bioRxiv; Nichols et al., 2017 Nature Neuroscience).


The approach taken in this document parallels that for COBIDAS MRI. Our aim is to generate a set of best practice guidelines for research methods, data analysis and data sharing in the MEEG discipline. The tables with recommendations and checklists may seem very detailed, but we recommend these as essential details that should be reported in any MEEG study, in order to ensure its reproducibility/replicability. The replication of MEEG studies is currently a challenge, as many reported studies continue to omit important methodological details. These details should also assist those who are new to the area in considering what is important in designing an experiment, collecting and analysing data, as well as reporting the study. Additionally, we hope that the COBIDAS MEEG document will be useful for reviewers of scientific manuscripts employing MEEG – in the same way that the COBIDAS MRI document has been used by the MRI community.


The COBIDAS MEEG document focuses on best practices in non-invasively recorded MEG and EEG data. The practices are broken down into six components for reporting: (1) experimental design, (2) data acquisition, (3) preprocessing and processing, (4) biophysical and statistical modelling, (5) results, as well as (6) data sharing and reproducibility.

Similar to the COBIDAS MRI document, we also make a clear distinction between reproducibility and replicability (see definitions here https://arxiv.org/abs/1802.03311). Reproducibility relates to working with (possibly) the same data and analysis methods to reproduce the same final observations/results. Replicability relates to using different data (and potentially different methods) to demonstrate similar findings across laboratories. Replication internally, i.e., across experiments within the laboratory, is a practice that might be considered by investigators.


Browse and, please, leave comments on the different ‘chapters’ of COBIDAS MEEG. This is ‘our’ document, belonging to all OHBM members and beyond.

  1. Approach and Scope
  2. Experimental Design
  3. Data Acquisition
  4. Data Preprocessing
  5. Biophysical and Statistical Analyses
  6. Result reporting
  7. Replicability and Data sharing

2. Experimental Design

With respect to experimental design, the goal of replicable research requires reporting how the participants were screened and selected, as well as what type of experimental paradigm was employed. This enables a critical reader to evaluate, for example, whether the findings will generalize to other populations. If the experimental manipulation included a task, specifying the instructions given to the participant is very important. All pertinent information regarding the experiment and the recording environment (cf. Section 3) should be noted to facilitate the efforts of others wishing to replicate the work (e.g., stimuli, timing, apparatus, sessions, runs, trial numbers, conditions, randomization or other condition-ordering procedures, periods of rest or other intervals, etc.). Ideally, the scripts and stimuli used (except for resting state data collection) are shared along with the manuscript, making exact experimental reproduction possible.

Lexicon of MEEG design

Below is a list of MEEG terminology commonly used to describe stimulation and task parameters and protocols. Although we recognize that some wording is used more often (e.g., a block versus a run and a trial versus an event), the list follows the terminology used by the Brain Imaging Data Structure (BIDS – http://bids.neuroimaging.io/) for MEG (Galan et al., 2017), EEG (Pernet et al., in prep) and iEEG (Hermes et al., in prep).

Session. A logical grouping of neuroimaging and behavioural data consistent across participants. A session includes the time involved in completing all experimental tasks. This begins when a participant enters the research environment until he/she leaves it. This would typically start with informed consent procedures followed by participant preparation (i.e., electrode placement and impedance check for EEG; fiducial and other sensor placement for MEG) and ends when the electrodes are removed (for EEG) or the participant exits the MEG room, but can also include a number of pre- or post- MEEG observations and measurements (e.g., structural MRI, additional behavioural or clinical testing, questionnaires), even on different days. Defining multiple sessions is appropriate when several identical or similar data acquisitions are planned and performed on all (or most) participants, often in the case of some intervention between sessions (e.g., training or therapeutics) or for longitudinal studies.

Run. An uninterrupted period of continuous data acquisition without operator involvement. Note that continuous data need not be saved continuously; in some paradigms, especially with long inter-trial intervals, only a segment of the data (before and after the stimulus of interest) is saved. In the MEEG literature, this is also sometimes referred to as a block.

Event. An isolated occurrence of a stimulus being presented, or a response being made. It is essential to have exact timing information in addition to the identity of the events, synchronized to the MEEG signals. For this, a digital trigger channel with specific marker values, or a text file with marker values and timing information can be used.

Trial. A period of time that includes a sequence of one or more events with a prescribed order and timing, which is the basic, repeating element of an experiment. For example, a trial may consist of a cue followed after some time by a stimulus, followed by a response, followed by feedback. Trials of the same type belong to the same condition. Critical events within trials are usually represented as “triggers” stored in the MEEG data file, or documented in a marker file.

Epoch. In the MEEG literature, the term epoch designates the outcome of a data segmentation process. Typically, epochs in event-related designs (for analysis of event related potentials or event related spectral perturbations) are time locked to a particular event (such as a stimulus or a response). Epochs can also include an entire trial, made up of multiple events, if the data analysis plan calls for it.

Sensors. Sensors are the physical objects or transducers that are used to perform the analogue recording, i.e., EEG electrodes and MEG magnetometers/gradiometers. Sensors are connected to amplifiers, which not only amplify, but also filter the MEEG activity.

Channels. Channels refer to the digital signals that have been recorded by the amplifiers. It is thus important to distinguish them from sensors.

Fiducials. Fiducials are “objects” with a well-defined location, which are used to facilitate the localization and co-registration of sensors with other geometric data (e.g., the participant’s own structural MRI image, a structural MRI template or a spherical model). Some examples are vitamin-E markers, reflective disks, felt-tip marker dots placed on the face, or sometimes even the EEG electrodes themselves etc. Fiducials are typically placed at a known location relative to, or overlying, anatomical landmarks.

Anatomical Landmarks. These are well-known, easily identifiable physical locations on the head (e.g., nasion at the bridge of the nose; inion at the bony protrusion on the midline occipital scalp) that have been acknowledged to be of practical use in the field. Fiducials can be placed at anatomical landmarks to aid localization of sensors relative to geometric data.

Sensor space. Sensor space refers to a representation of the MEEG data at the level of the original sensors, where each of the signals maps onto the spatial location of one of the sensors.

Source space. Source space refers to MEEG data expressed at the level of potential neural sources that gave rise to the measured signals. Each signal maps onto a spatial location that is readily interpretable in relation to individual or template-based brain anatomy.
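The lexicon above maps naturally onto the tab-separated events files used by BIDS, where each row is one event with an onset, a duration and a condition label (trial_type). A minimal sketch in Python, with hypothetical onsets and condition names (this is an illustration of the structure, not an official BIDS tool):

```python
import csv
import io

# Each row is one event; trials of the same type share a trial_type
# (condition). Column names follow the BIDS events.tsv convention.
events = [
    {"onset": 1.250, "duration": 0.5, "trial_type": "face"},
    {"onset": 3.875, "duration": 0.5, "trial_type": "house"},
    {"onset": 6.500, "duration": 0.5, "trial_type": "face"},
]

# Write the events as tab-separated text (in a real study this would be
# a file such as sub-01_task-faces_events.tsv).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["onset", "duration", "trial_type"],
                        delimiter="\t")
writer.writeheader()
writer.writerows(events)
tsv = buf.getvalue()

# Reading back: recover the set of conditions from the event list.
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
conditions = {r["trial_type"] for r in rows}
print(sorted(conditions))  # ['face', 'house']
```

Storing events this way keeps the exact timing information synchronized with the recorded channels, which is what makes later epoching reproducible.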

Statistical power

There is currently no agreed-upon single method for computing statistical power for MEEG data. The committee’s recommendations are: (1) that all decisions related to computing statistical power be made prior to starting the experiment; (2) to define from the literature the main data feature(s) of interest; and (3) to estimate the minimal effect size of interest to determine power. The minimal effect size should be determined from prior estimates in the literature and/or pilot data. It is, however, important to keep in mind that errors in the effect size calculation, and the subsequent power calculation, can be introduced by small sample sizes in pilot data collections (e.g., see Albers & Lakens, 2018).

Statistical power determines the researcher’s ability to observe an experimental effect. Under the assumption that this effect exists, and along with the quality of the experiment, statistical power thus determines the replicability of a study and is, therefore, an important factor to consider. For instance, in order to observe a behavioural effect in terms of response times, an estimated number of at least 1600 observations (e.g., 40 participants with 40 trials each for a given condition) is needed when using a mixed model analysis approach (Brysbaert & Stevens, 2018). As neural effects in MEEG studies likely have a lower signal-to-noise ratio than response time effects, and some trials/epochs will be rejected due to artifacts (diminishing the number of trials/epochs included in statistical analyses), more events and/or participants are needed than is current common practice. However, the balance between the number of trials and the number of participants depends on the MEEG feature of interest and the experimental design (within vs. between participants; Boudewyn et al., 2017).
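To make the effect size / sample size / power relation concrete, here is a minimal sketch using a standard normal approximation for a two-sided one-sample (or paired) test. All numbers are illustrative assumptions, not recommendations; a dedicated power analysis package should be used for real study planning:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(effect_size, n, z_crit=1.96):
    """Approximate power of a two-sided one-sample (or paired) test at
    alpha = 0.05, via a normal approximation; effect_size is Cohen's d.
    Illustrative only: ignores the (tiny) lower rejection tail."""
    return normal_cdf(effect_size * sqrt(n) - z_crit)

# Hypothetical minimal effect size of interest, d = 0.4 (e.g., from pilot data)
for n in (20, 40, 80):
    print(n, round(approx_power(0.4, n), 2))
```

The sketch shows why the choice of minimal effect size matters so much: for the same (assumed) d, power changes steeply with n, so small errors in a pilot-based d estimate propagate directly into the sample-size decision.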


The population from which the participants are sampled is critical to any experiment, not just to those with clinical samples. The method of participant selection (Martínez-Mesa et al., 2016), the population from which they were selected (e.g., laboratory members, university undergraduates, hospital, general population), recruitment method (e.g., direct mailing, advertisements), specific inclusion and exclusion criteria and compensation (financial or other type) should be described clearly. Any specific sampling strategies that constrain inclusion to a particular group should be reported.

One should take special care with defining a “typical” versus “healthy” sample. Screening for lifetime neurological or psychiatric illness (e.g., as opposed to “current” ones) could have unintended consequences. For example, in older individuals this could exclude up to 50% of the population (Kessler, 2005) and this restriction could induce a bias towards a “super-healthy” participant sample, thus limiting the generalization to the population as a whole. The use of inclusive language when recruiting participants is also recommended (e.g., using gender-neutral pronouns in recruiting materials).

Participant demographic information such as age, gender, handedness and education (total years of education) and highest qualification should be included in the experimental description at a minimum, as these variables have been associated with changes in brain structure and function (BRAINS, 2017). Medications that affect the central nervous system should be reported (unless these were part of the exclusion criteria). Additional ancillary investigations (e.g., questionnaires, psychological assessments etc.) should also be reported. Finally, it is important to include information related to obtaining written informed consent for adult participants (or parental informed consent/informed assent in minors), with a specific mention of the institutional review board that approved the study. 

Task or stimulation parameters

It may be helpful to describe the characteristics of the overall testing environment, task-related instructions and number of experimenters. In task-free recordings of resting state activity, while there are no stimulation parameters, it is important to report the instructions given to the participant. As a minimum, whether the eyes were open or closed needs to be noted, and for studies with eyes open whether there was a fixation point or not. Participant position (e.g., seated or lying down) should also be noted.

If there is a task with stimuli, stimulus properties need to be described in sufficient detail to allow replication, including any standardization procedures used in stimulus creation. The means of producing the stimuli should be reported: for example, if stimuli from existing stimulus sets or databases are used, the name of the database (or subset of stimuli used) should be provided. If stimuli are created or manipulated, the specific software or algorithms need to be identified.

It is important to note that the high time resolution of MEEG signals makes them highly sensitive to stimulus properties and stimulus/task timing.  For visual presentations, stimulus size in degrees of visual angle, viewing distance, clarity (i.e., visual contrast, intensity, etc.), colour, site of stimulation (i.e., monocular vs binocular, position in the visual field, etc.), as well as the display device and method of projection (including refresh rate or response time of the monitors) should be reported. Differences in intensity or contrast between different stimulus conditions should be noted. For auditory presentations, stimulus properties (e.g., frequency content, duration, onset/offset envelope, etc.), intensity, ear of stimulation, and the type, manufacturer and model of the delivery device (e.g., ear inserts, panel speakers, etc.) are important to include. For somatosensory stimulation, stimulus type (e.g., electrical, air puff) and characteristics (e.g., duration, frequency), location on the body with reference to anatomical landmarks, and strength (ideally with respect to some sensory or motor threshold) should be reported. The distance between the site of peripheral stimulation and brain and skin temperature are also important as they will affect response latency independent of the experimental manipulation. For other modalities of stimulation, providing sufficient details regarding stimulus properties, timing and intensity will be critical for replicability. Calibration procedures, including software and hardware used, should also be described. Where relevant, the rationale for selecting a specific parameter (e.g., contrast, harmonic content) should be indicated. If features were determined individually for each participant, the criteria used and the psychophysical method used should be detailed.

For tasks that are self-paced and not explicitly driven by stimuli, e.g., voluntary movements in readiness potential (Bereitschaftspotential) experiments, the instructions given for each block of the experiment, and how the task-relevant events (e.g., movement onset or offset) are quantified and recorded, need to be reported.

For all tasks, it is essential to describe the overall structure and timing of the task including practice sessions, number of trials per condition, the interstimulus (offset to onset) or stimulus-onset-asynchrony (SOA, onset to onset) intervals and any temporal jitter in these intervals between sequential events (whether intended or not), the order of stimulus presentation, feedback or handling of errors, and whether conditions were counterbalanced. Storage of stimulus and response triggers in the datafile should also be mentioned (discussed in more detail in Section 3).  
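The structure-and-timing parameters listed above (trials per condition, SOA, jitter, presentation order) can be sketched as a simple schedule generator. The condition names, SOA and jitter values below are hypothetical examples, not recommended settings:

```python
import random

def make_schedule(conditions, trials_per_condition, soa=1.0, jitter=0.2, seed=0):
    """Build a randomized trial schedule: each trial gets a condition label
    and an onset time, with SOA = soa + uniform(0, jitter) seconds.
    Fixing the seed makes the schedule itself reproducible and reportable."""
    rng = random.Random(seed)
    trials = conditions * trials_per_condition
    rng.shuffle(trials)                      # randomized presentation order
    onsets, t = [], 0.0
    for _ in trials:
        onsets.append(round(t, 3))
        t += soa + rng.uniform(0.0, jitter)  # intended temporal jitter
    return list(zip(onsets, trials))

schedule = make_schedule(["standard", "deviant"], trials_per_condition=3)
for onset, cond in schedule:
    print(f"{onset:7.3f}  {cond}")
```

Reporting exactly these parameters (and, ideally, sharing the script) is what allows another laboratory to regenerate the same trial structure.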

Behavioural measures collected during an MEEG session

A number of behavioural measures can be acquired during an MEEG experiment.  The most common measures are obtained via a button press on a response pad or keyboard, mouse or joystick; however, many other response types are possible. These can include responses by voice, movements of the hands, fingers, feet, eyes, or specific contractions of muscles (most typically assessed via electromyographic (EMG) recordings). In these latter cases, the positioning of recording electrodes for EMG and data acquisition parameters should be described (see Section 3 and 6, BIDS standards).

Regardless of the actual type of response, it is imperative to describe the exact nature of the response acquisition device, including product name, manufacturer, model numbers, etc., as well as any pertinent recording parameters. Further, the method by which the device interfaces with the MEEG data needs to be described, as well as any modifications to the off-the-shelf product. If devices are built in-house, the components and basic function of the device need to be well described (providing a schematic diagram of the device or a description of the basic circuit might be helpful).

In addition to the response devices, appropriate descriptions of the behavioural response (central measures like mean or median as well as measures of variability) and performance (e.g., response time, accuracy, false alarms, etc.) should be provided in the Results section.

Table 1. Design reporting check-sheet

Terminology and experimental breakdown
– Number of sessions, runs per session, trials per run, and conditions.
– Detail how data were epoched.

Statistical power
– Detail any analysis performed to justify the number of trials/participants.

Participants
– Recruitment and selection strategy.
– Inclusion and exclusion criteria.
– Demographics (gender, age, handedness, education, and other relevant variables).
– Information about written informed consent (or informed assent for pediatric participants) and the name of the Institutional Review Board.

Stimulation/task parameters
– Characteristics of the overall testing environment and number of experimenters.
– Instructions (task-related or not).
– Stimulus properties.
– Calibration procedures.
– Structure and timing of the task (number of trials, ISI/SOA, temporal jitter, order of stimuli/conditions, counterbalancing, etc.).

Behavioural data
– What was collected (e.g., motor responses, eye tracking).
– How it was collected (hardware).
– For resting state data, indicate whether the participant’s eyes were open or closed.


3. Data Acquisition

MEEG device

MEEG studies should report basic information on the type of acquisition system being used (including manufacturer and model), the number of sensors and their spatial layout. For example, for EEG studies the spatial layout will most likely correspond to the International 10-20 (Jasper, 1958; Klem et al., 1999), International 10-10 (Chatrian et al., 1985), International 10-5 (Oostenveld and Praamstra, 2001) or geodesic systems (Tucker, 1993). Additionally, the sensor material should be specified (e.g., Ag/AgCl electrodes) and whether the electrodes are active or passive.

For MEG studies, the type of sensors should also be specified (e.g., planar or axial gradiometers, or magnetometers; cryogenic or room-temperature), as well as the location and type of reference sensors. Means of determining the position of the participant’s head with respect to the MEG sensor array should be reported, and also when this operation was performed (e.g., continuously, or at the start of each session). The type of shielded room (when used) should also be specified.

Additionally, for MEG studies, it is advisable to include “empty room” recordings using the specific experimental set-up as during the experiment (but without the participant present) to characterize any participant-unrelated artifacts. For EEG studies, the calibration procedure should be carried out on the amplifiers prior to each recording session, and if possible calibration information should be stored with the EEG data.

Acquisition parameters

For MEEG studies it is mandatory to specify basic parameters such as acquisition type (continuous, epoched), sampling rate and analogue filter bandwidth (including the parameters of the low pass anti-aliasing filter—an obligatory part of the recording system—as well as high pass filtering). Notch filtering (to eliminate line noise), if used during recording, should also be reported. The inclusion of digitisation resolution (e.g., 16-bit or 24-bit) is also helpful. It should be noted that all MEEG recording systems will use some filter bandpass potentially as a default that may not be altered by the user. The inclusion of parameters related to filter type and roll-offs is essential in some situations (e.g., when discussing the timing of ERP components or spectral components). Note that the filter bandpass may also be adjusted post hoc for analysis, and this should also be reported when describing analysis procedures (see Section 4.3).
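The relation between sampling rate and the anti-aliasing low-pass cutoff can be sanity-checked in a few lines. This is only a sketch; the 25% safety margin below is an arbitrary illustrative choice, not a published standard:

```python
def check_antialias(sampling_rate_hz, lowpass_cutoff_hz, margin=0.25):
    """Check (illustratively) that the acquisition low-pass (anti-aliasing)
    cutoff sits safely below the Nyquist frequency (sampling_rate / 2).
    The margin accounts for the finite roll-off of real analogue filters."""
    nyquist = sampling_rate_hz / 2.0
    return lowpass_cutoff_hz <= (1.0 - margin) * nyquist

print(check_antialias(1000, 330))  # True: 330 Hz <= 0.75 * 500 Hz
print(check_antialias(250, 100))   # False: 100 Hz > 0.75 * 125 Hz = 93.75 Hz
```

This is also why filter roll-off, not just the nominal cutoff, belongs in the report: two systems with the same cutoff but different roll-offs pass different amounts of near-Nyquist energy.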

For EEG recordings, the reference and ground electrodes used in data acquisition should be specified. Similarly, reference electrode(s) used in data analysis should also be reported (see Section 4.4). For data acquisition, physically linked earlobe/mastoid electrodes should not be used, as they are not actually a neutral reference and make further modelling intractable (see also Katznelson, 1981, Neurophysics of the EEG, 1st ed.). Further, distortions in EEG activity can occur as a result of relative differences in impedances between the two earlobe electrodes. While it has been recommended in various sources that the left earlobe/mastoid be used as acquisition reference, it should be noted that cardiac artifacts could be exaggerated if using a left earlobe/mastoid reference. An alternative would be to use the right earlobe instead.

Sensor position digitization procedures, if performed, should be described. For EEG, the type of approach used, and the manufacturer and model of the device should be specified, as well as the time in relation to the experiment that this procedure was performed. In MEG studies, when determining the position of the head with respect to the sensor array, the locations of EEG, other electrodes, or head localisation coils may be digitized at the same time. If high-resolution structural MRI scans of participants’ heads are acquired for the purposes of source localization, details of MRI scanning protocol, as well as fiducial types, their locations relative to anatomical landmarks, and the native coordinate system, should be described. If less commonly used fiducial positions are adopted, example photographs of fiducial placement might be helpful. Methods for co-registering MEEG sensors and fiducials to individual structural MRI scans or templates (including software name and version) should be reported (see also Sections 2.1 and 4.6).

Skin preparation methods used for electrode application, as well as the electrode material and the conducting gels or saline solutions (if used), including procedures used for measuring electrode impedances, should be described. Note that acceptable levels for electrode impedances will vary relative to the recording amplifier’s input impedance, therefore it is advisable to include a statement on what acceptable electrode impedances are for the type of amplifier being used, as well as what the actual values were (on average, or an upper bound). The time(s) at which impedances were measured during the course of the experiment e.g., start, middle, end, should also be noted. It is advisable to store the impedance measurements digitally, together with the EEG data. Additionally, the active-passive nature of the electrodes and the recording system (which determines the acceptable impedance levels) should be reported.

Additional electrodes may be applied to the scalp/face to measure electro-oculographic (EOG) signals in either EEG or MEG studies. EMG activity may be recorded from the face, hands/arms, limbs or body. For EOG and EMG electrodes, their exact spatial positions should be specified, preferably with reference to well-known anatomical landmarks (e.g., outer canthus of the eye). It should be specified if these data are collected with the same or different settings to the MEEG data.

In MEEG recordings the position of the participant should be clearly documented. Head position is known to affect the strength of different EEG rhythms, as it produces displacements of brain compartments, and therefore has an appreciable effect on source modelling (Rice et al., 2013). This is likely to be an issue for MEG recordings as well, and is an additional source of variance when comparing with fMRI data from the same participants, where the participant sits upright in one session (EEG or MEG) and lies supine in the other (fMRI).

In some clinically based studies, some participants may be studied under sedation or anaesthesia. The anaesthetic agents may affect the MEEG data significantly, hence the agent, dosage and administration method (intravenous, intramuscular, etc) should be reported.

Stimulus presentation and recording of peripheral signals

Information on the type of stimulators (including manufacturer and model) should be provided (see Section 2). If stimulators are digitally controlled, the type and version of the controlling software should also be reported. Calibration procedures for stimulators, if applicable, should be described. Similarly, the manufacturer and model of devices used for collecting peripheral signals, such as a microphone used to record speech output, should be reported.

As MEEG methods have very high temporal resolution, it is also essential to measure and report any time delays between stimulus timing or the recording of peripheral signals and the time course of the MEEG signals. For example, a visual or auditory stimulus setup may include a systematic delay from the trigger sent by the stimulus software to the actual arrival of the stimulus at the sensory organs. While a fixed delay is common and easy to correct a posteriori during analysis, random temporal jitter can be highly problematic. Any information that may influence the interpretation of the results, such as stimulus strength or timing, visual angle, microphone placement, etc., should be reported. For studies involving hyperscanning, a description of the synchronization of multiple data acquisition systems (e.g., EEG-EEG, MEG-EEG, EEG-fMRI) should be provided.
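A fixed, measured delay can be compensated during analysis by shifting event markers; random jitter cannot. A minimal sketch in Python (the function name and the 23 ms delay are illustrative assumptions, not values from this document):

```python
import numpy as np

def correct_fixed_delay(event_samples, delay_ms, sfreq):
    """Shift event markers by a fixed, measured stimulus delay.

    event_samples : trigger onsets in samples; delay_ms : measured delay
    (e.g., with a photodiode) between trigger and stimulus arrival;
    sfreq : sampling frequency in Hz.
    """
    shift = int(round(delay_ms / 1000.0 * sfreq))
    return np.asarray(event_samples) + shift

# Example: triggers recorded at 1000 Hz with a measured 23 ms projector delay
events = correct_fixed_delay([1000, 2500, 4000], delay_ms=23, sfreq=1000)
```

Reporting the measured delay itself (and how it was measured) matters as much as applying the correction.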

Vendor-specific information and format

When providing acquisition information in a manuscript, keep in mind that readers may use an EEG or MEG device from a different manufacturer, so vendor-specific terminology should be minimized. To provide comprehensive acquisition detail, we recommend reporting vendor-specific information, particularly hardware parameters, using generic and agreed-upon terminology (see, e.g., the Brain Imaging Data Structure). If space constraints are a problem in manuscript preparation, these details can be provided as supplementary material.

Table 2. Data Acquisition check-sheet

Device – MEG or EEG manufacturer, model, sensor specifications

– Details on additional devices used (manufacturer and make) for additional measures (behaviour or other)

Sensor type and spatial layout – MEG: planar/axial gradiometers and/or magnetometers, spatial layout

– Electrodes for EEG, EOG, ECG, EMG, skin conductance (electrode material, passive/active, other)

– EEG spatial layout: 10-20, 10-10 system, Geodesic, other. If not conventional, show map of electrode positions

Participant preparation and test room – Ambient characteristics (and if appropriate, empty room recording for MEG), detail if the recording room was shielded for EEG

– Participant preparation (EEG: skin preparation prior to electrode application, electrode application; MEG: participant degaussing, special clothing)

Impedance measurement – Report impedances for EEG/EOG/ECG/EMG electrodes, preferably digitally storing them to the datafile, indicate timing of impedance measurement(s) relative to the experiment

Data acquisition parameters – Software system used for acquisition

– Low- and high-pass filter characteristics and sampling frequency

– Continuous versus epoched acquisition?

– For EEG/EOG/ECG/EMG/skin conductance: report reference and ground electrode positions

Sensor position digitization – EEG/EOG: method (magnetic, optical, other), manufacturer and model of the device used

– MEG: monitoring of head position relative to the sensor array, the use of head movement detection coils and their placement

– In both MEG and EEG, report the time of digitization in relation to the experiment, and describe the 3D coordinate system

Synchronization of stimulation devices with MEG and/or EEG amplifiers – Report either accuracy or error in synchronization

– Synchronization between hyperscanning MEG or EEG amplifiers / MRI clock and EEG amplifiers



4. Data Preprocessing

Software-related issues

Many of the available EEG and MEG systems come with analysis software packages whose documentation describes the implementation of the different preprocessing tools in varying levels of detail. Several freely available software packages running on MATLAB/Python/R platforms, as well as commercial data analysis packages, offer alternative implementations of data analysis tools; custom-written software can also be used. The software used for the preprocessing and subsequent analysis must be indicated (including the version). In-house software should be described in explicit detail (or peer-reviewed references or preprints with such details cited). The source code should be publicly released and access links provided (e.g., GitHub or another readily accessible internet-based location).

Defining workflows

Preprocessing is a crucial step in MEEG signal analysis, as the data are typically distorted by various factors. The sequence of steps in the preprocessing pipeline, and their order, influences the data used for subsequent analysis. The workflow therefore has to be described step by step, in sufficient detail that another researcher could reproduce it exactly. For most studies, recommended steps after general visual data inspection include: 1) identification and removal of electrodes/sensors with poor signal quality (it is essential to clearly describe the methodology and criteria used, particularly if interpolation is performed); 2) artifact identification and removal (similarly, state the method and criteria used to identify artifacts, and the method used if correction is performed); 3) detrending (when and if appropriate); 4) digital low- and high-pass filtering, with filter-type characteristics (IIR/FIR, type of filter [e.g., Butterworth, Chebyshev, etc.], cut-off frequency, roll-off/order, etc.); 5) data segmentation (if performed); 6) additional identification/elimination of artifacts (blinks, cardiac activity, etc.); 7) baseline correction (when and if appropriate); 8) re-referencing for EEG (e.g., earlobe/mastoid reference, common-average reference, bipolar, surface Laplacian; when and if desired).
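As an illustration only, steps 1, 4, 5 and 7 of such a workflow might be sketched as below; the ordering, filter settings and the variance-based bad-channel criterion are arbitrary assumptions for the example, not recommendations from this document:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(data, sfreq, events, tmin, tmax,
               l_freq=0.1, h_freq=40.0, bad_z=3.0):
    """Illustrative ordered pipeline: bad-channel flagging, band-pass
    filtering, segmentation, and baseline correction (assumes tmin < 0).

    data : (n_channels, n_samples) continuous recording
    events : stimulus onsets in samples; tmin/tmax : epoch window in s
    """
    # 1) flag channels whose log-variance deviates strongly from the rest
    logvar = np.log(np.var(data, axis=1))
    z = (logvar - logvar.mean()) / logvar.std()
    bads = np.where(np.abs(z) > bad_z)[0]

    # 4) zero-phase band-pass filter (4th-order Butterworth)
    sos = butter(4, [l_freq, h_freq], btype='bandpass', fs=sfreq, output='sos')
    filtered = sosfiltfilt(sos, data, axis=1)

    # 5) segmentation around events
    s0, s1 = int(tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([filtered[:, e + s0:e + s1] for e in events])

    # 7) baseline correction using the pre-stimulus interval
    baseline = epochs[:, :, :-s0].mean(axis=2, keepdims=True)
    return epochs - baseline, bads
```

Every number appearing here (filter order, cut-offs, z threshold, epoch window) is exactly the kind of parameter the text asks authors to report.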

The steps and sequence described above are appropriate for most basic analyses. That said, for specific analyses, or due to specific data characteristics, the order of processing may vary for scientific reasons. For example, data segmentation could occur at different points in the pipeline, depending in part on the specific artifact removal methods used. Data re-referencing could in principle also be performed at various points in the pipeline, but it is important to note that re-referencing can spread artifacts across channels. The committee recognizes that investigators may order pipeline steps for specific reasons, and hence we are not prescriptive about a particular order of data analysis. For each study, however, the order of the steps in the preprocessing pipeline should be made very clear, so that other investigators can replicate the study.

Visual inspection of the spatiotemporal structure in the signals after each step is recommended and, if needed, remaining epochs with poor data quality should be marked and excluded from further analysis. When such epochs are additionally rejected, a record should be provided so that the same analysis can be reproduced from the raw data (e.g., after preprocessing, epochs 10, 25, 45 and 60 were additionally excluded because they showed residual artifacts). During preprocessing, topographic maps of the distribution of the means and variances of scalp voltages (for EEG) and of magnetic fields exiting and entering the head (for MEG) can serve as an additional tool for spotting channels with poor data quality that might escape detection in waveform displays (Michel et al., 2009, Electrical Neuroimaging).

Artifacts and filtering

Artifacts from many different sources can contaminate MEEG data and must be identified and/or removed. Artifacts can be of non-physiological (bad electrode contact, power line, etc.) or physiological (pulse, muscle activity, sweating, movement, etc.) origin. The data should first be visually inspected (where the investigator is blind to experimental conditions) to ascertain what types of artifact are actually present in the data. Subsequently, established artifact identification/removal pipelines can be run, or a plan can be constructed for data segments with artifacts to be excluded.

If automatic artifact detection methods are used, they should be followed up by visual inspection of the data. Any operations performed on the data (see Section 4.1 on workflows) must be described, specifying the parameters of the algorithms used. It is recommended to describe in detail the type of detrending performed and the algorithm order (e.g., linear 1st order, piecewise, etc.). When automatic artifact rejection/correction is performed, report which method was used and the range of its parameters (e.g., epochs rejected when the EEG amplitude range exceeded 75 µV, or when deviating by more than 3 SD from the mean kurtosis). Similarly, for channel interpolation, specify the method and order (e.g., trilinear, spline of nth order). When ICA is used, describe the algorithm and parameters used, including the number of ICs. If artifacts are rejected using ICA, report how they were identified; it is also worth considering including the topographies of removed components in the Supplementary Materials of manuscripts. If manual artifact rejection procedures are used, it is essential to describe what types of features in the MEEG signal were identified and to define the criteria used to reject segments of data. This is essential for allowing the reader to reproduce the results, as well as for comparing results between studies (see above on reporting visually removed trials (epochs), for instance). Once artifacts have been removed, the average number of remaining trials per condition should also be reported.
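For concreteness, the two example criteria above (a peak-to-peak amplitude range and a kurtosis deviation) could be implemented as in the sketch below; the thresholds and the way the criteria are combined are exactly the choices that must be reported:

```python
import numpy as np
from scipy.stats import kurtosis

def reject_epochs(epochs, range_uv=75.0, kurt_sd=3.0):
    """Flag epochs by peak-to-peak amplitude and by kurtosis deviation.

    epochs : (n_epochs, n_channels, n_samples), amplitudes in microvolts.
    An epoch is flagged if any channel exceeds the peak-to-peak threshold,
    or if its maximal channel kurtosis deviates from the across-epoch mean
    by more than `kurt_sd` standard deviations.
    """
    ptp = epochs.max(axis=2) - epochs.min(axis=2)   # (n_epochs, n_channels)
    bad_range = (ptp > range_uv).any(axis=1)

    k = kurtosis(epochs, axis=2).max(axis=1)        # worst channel per epoch
    bad_kurt = np.abs(k - k.mean()) > kurt_sd * k.std()

    return bad_range | bad_kurt
```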

In MEG, specialized procedures for correcting data that contain artifacts can be used, eliminating the need to reject data. These include signal-space projection methods (SSP; Uusitalo & Ilmoniemi, 1997), which use "empty room" measurements together with the MEG data to differentiate external sources of interference from brain activity, and signal space separation (SSS) methods and their temporally extended variants (tSSS; Taulu et al., 2004; Taulu & Simola, 2006), which rely on the geometric separation of brain activity from noise signals in MEG data. SSS methods have been recommended as superior to SSP (Haumann et al., 2016). The ordering of preprocessing steps for cleaning MEG data is particularly important, owing to potential data transformations; for some caveats see Gross et al., 2013.

For both MEG and EEG data, particular attention must be paid to describing filtering, as this can have dramatic consequences for estimating time courses and phases (Rousselet, 2012; Widmann & Schröger, 2012). Some investigators have advocated using a sampling rate at least 4 times the putative cut-off frequency of the low-pass filter used (Luck et al., 2014, and the latest IFCN guidelines). That said, the roll-off rate/slope of the filter must also be taken into consideration and therefore specified, together with a description of the type, bands, etc. of the filters used. The type and parameters of any post-hoc filter and (for EEG, EOG and EMG) any re-computed references must be specified, as they crucially affect the outputs of waveform or frequency analyses (but not topographies). For frequency and time-frequency analyses, details of the transformation algorithm must similarly be reported.
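To make this concrete, the sketch below designs a hypothetical 4th-order low-pass Butterworth filter at 40 Hz (fs = 1000 Hz; all values arbitrary) and evaluates its attenuation at a few frequencies, i.e., the kind of filter description (type, order, cutoff, roll-off) that belongs in a Methods section:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

# Hypothetical low-pass filter whose characteristics would be reported:
# 4th-order Butterworth, 40 Hz cutoff (-3 dB point), fs = 1000 Hz.
sfreq, cutoff, order = 1000.0, 40.0, 4
sos = butter(order, cutoff, btype='lowpass', fs=sfreq, output='sos')

# Attenuation (in dB) in the passband, at the cutoff, and above it,
# which characterizes the roll-off of this particular design.
freqs = np.array([10.0, 40.0, 80.0, 160.0])
w, h = sosfreqz(sos, worN=freqs, fs=sfreq)
atten_db = 20 * np.log10(np.abs(h))
```

Stating only "a 40 Hz low-pass filter" would omit the order and roll-off that this calculation makes explicit.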

Data preprocessing also forms an essential part of multivariate techniques and can dramatically affect decoding performance (Guggenmos et al., 2018). We recommend carefully describing the method used, in particular whether noise normalization is performed channel-wise (univariate normalization) or across all channels together (multivariate normalization, or whitening). For the latter, the covariance characterization must be specified (based on the baseline, on whole epochs, or computed at each time point).
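A minimal sketch of multivariate noise normalization, assuming the covariance is estimated from a baseline interval (the function and its interface are illustrative, not taken from any particular toolbox):

```python
import numpy as np

def whiten_epochs(epochs, baseline_slice):
    """Multivariate noise normalization (whitening) sketch.

    Estimates the channel covariance from the baseline period of every
    epoch, then applies the inverse matrix square root to all data, so
    that baseline noise becomes (approximately) uncorrelated with unit
    variance. epochs : (n_epochs, n_channels, n_samples).
    """
    base = epochs[:, :, baseline_slice]
    # concatenate baselines across epochs -> (n_channels, n_total_samples)
    x = base.transpose(1, 0, 2).reshape(base.shape[1], -1)
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    # inverse matrix square root via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.einsum('ij,ejs->eis', w, epochs)
```

Which interval feeds the covariance estimate (baseline, epochs, or each time point) is precisely the detail the paragraph above asks authors to state.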


Re-referencing

EEG is a differential measure; in non-clinical settings it is usually recorded relative to a fixed reference (in contrast to clinical practice, which usually uses bipolar montages). While EEG is recorded relative to some reference, it can later be re-referenced by subtracting the value of another channel, or a weighted sum of channels, from all channels. The need for re-referencing depends on the goals of the analysis and the EEG measures used: the average reference (see below), for example, is beneficial for the evaluation of coherence and for source modelling, although it may produce an inflated level of synchrony, or leakage of activity from one region to another, when assessed in sensor space. Re-referencing does not change the shape of the scalp topography at a single time sample; however, it can cause issues when working in sensor space with isolated sensors (Hari & Puce, 2017). Specifically, the shape of the recorded waveforms at specific electrodes can be altered, as can the degree to which waveforms are distorted by artifacts. Hence, when comparing across experiments, the references used should be taken into account, and if unusual, the reference choice should be justified.

Re-referencing relative to the average of all channels (the common average reference, CAR) is most common for high-density recordings, as the first step in EEG source localization. The main assumption behind the CAR is that the summed potentials from electrodes spaced evenly across the entire head should be zero (Bertrand et al., 1985). While this is usually not a problem for EEG datasets of 128 sensors or more (Srinivasan et al., 1998; Nunez & Srinivasan, 2006), for low-density recordings and ROI-based analyses in sensor space there is a serious risk of violating the assumptions of the average reference and of introducing shifts in potentials (Hari & Puce, 2017). Hence, the CAR should be avoided in low-density recordings (e.g., < 64 channels).
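Re-referencing itself is a simple linear operation. The sketch below (an illustrative function, not a toolbox API) implements the common average reference and a mean-of-channels reference, and makes explicit that between-channel differences, and hence the topography at any single sample, are unchanged:

```python
import numpy as np

def rereference(data, mode='car', ref_channels=None):
    """Re-reference EEG by subtracting a reference signal from all channels.

    data : (n_channels, n_samples). mode 'car' subtracts the common
    average; mode 'channels' subtracts the mean of `ref_channels`
    (e.g., linked mastoids).
    """
    if mode == 'car':
        ref = data.mean(axis=0)
    elif mode == 'channels':
        ref = data[ref_channels].mean(axis=0)
    else:
        raise ValueError(mode)
    return data - ref
```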

An alternative to the CAR is the Reference Electrode Standardization Technique (REST) (Yao, 2001). Both the CAR and REST have been shown to be the extremes of a family of Bayesian reference estimators (Hu et al., 2018). If the focus of the data analysis is on source-space inference, no re-referencing is theoretically necessary. That said, re-referencing the data may be useful for comparisons with the existing literature.

For MEG data, the order (e.g., 3rd order with CTF data) of gradiometer re-referencing should be reported (if applicable), as well as when in the preprocessing pipeline this step occurs.

Spectral and time-frequency analysis

A common approach for the analysis of MEEG data is to examine the data in terms of its frequency content, and these analyses are applicable for both task-related as well as resting state designs. One important caveat for these types of analyses is that the acquired data should have been sampled at around 5 times the oscillatory frequencies of interest, due to the analogue anti-aliasing filter characteristics – underscoring the importance of planning all data analyses prior to data acquisition, ideally during the design of the study.

In task-related designs MEEG activity can be classified as evoked (i.e., be phase-locked to task events/stimulus presentation) or induced (i.e., related to the event, but not exactly phase-locked to it). Hence, it is important to specify what type of activity is being studied. The domain in which the analysis proceeds (time and frequency or frequency alone) should be specified, as should the spectral decomposition method used (e.g., wavelets, spectral analyses), and whether the data are expressed in sensor or source space. These methods can be the precursor to the assessment of functional connectivity (see Section 4.6).

The spectral decomposition algorithm, as well as the parameters used, should be specified in sufficient detail, since these crucially affect the outcome. Depending on the decomposition method used (e.g., wavelet convolution, Fourier decomposition, Hilbert transformation of bandpass-filtered signals, or parametric spectral estimation), one should describe the type of wavelet (including its tuning parameters), the exact frequency or time-frequency parameters (frequency and time resolutions), the exact frequency bands, the number of data points, zero padding, windowing (e.g., a Hann window) and spectral smoothing. It is relevant to note that the required frequency resolution is defined as the minimum frequency interval that two distinct underlying oscillatory components need to have in order to be dissociated in the analysis (Bloomfield, 2004; Boashash, 2003). This should not be mistaken for the increments at which the frequency values are reported (e.g., when smoothing or oversampling is used in the analyses). When using overlapping windows (e.g., in Welch's method) or multi-taper windows for robust estimation, the resulting spectral smoothing may cause closely spaced narrow frequency bands to blend. This should be carefully considered and reported.
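The distinction between bin spacing and effective resolution can be illustrated with Welch's method (all signal parameters below are arbitrary): with 2 s Hann windows the reported bins are fs/nperseg = 0.5 Hz apart, but two components closer together than the window's spectral mainlobe would still blend:

```python
import numpy as np
from scipy.signal import welch

# Welch estimate of a 10 Hz sinusoid in noise (illustrative parameters).
sfreq = 250.0
t = np.arange(0, 20, 1 / sfreq)                  # 20 s of data
rng = np.random.default_rng(42)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

nperseg = 500                                     # 2 s Hann windows
freqs, psd = welch(signal, fs=sfreq, window='hann', nperseg=nperseg)
peak_freq = freqs[np.argmax(psd)]                 # bins are 0.5 Hz apart
```

Reporting nperseg, the window type and the overlap is what lets a reader distinguish the bin spacing from the achievable resolution.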

Source modelling

MEEG data are recorded from outside the head. Source modelling is an attempt to explain the spatio-temporal pattern of the recorded data in sensor space as resulting from the activity of specific neural sources within the brain (in source space), a process known as solving the inverse problem. The inverse problem is mathematically ill-posed, as it has no unique solution, so a solution must be estimated under additional assumptions. Source modelling first requires a solution of the forward problem, which predicts the effect of the tissues in the head on the propagation of activity to the MEEG sensors. These procedures require a volume conduction model of the head and a source model, both of which can crucially influence the accuracy and reliability of the results (Baillet et al., 2001; Michel & He, 2018). Practically, the forward model (or lead field matrix) describes the magnetic field or potential distributions in sensor space that result from a predefined set of (unit-amplitude) sources. The sources are typically defined either on a volumetric grid or on a cortically constrained sheet. Information from the forward model is then used to estimate the solution of the inverse problem, in which the measured MEEG signals are attributed to active sources within the brain. It is important to note that source modelling procedures essentially provide approximations of the inverse solution as solved under very specific assumptions or constraints.
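As a toy illustration of the forward/inverse relationship (a random 6-sensor, 4-source lead field, overdetermined for simplicity, unlike realistic underdetermined settings), an L2 minimum-norm estimate with Tikhonov regularization can be written as s = Lᵀ(LLᵀ + λI)⁻¹m; λ is precisely the kind of regularization parameter that must be reported:

```python
import numpy as np

def minimum_norm(leadfield, measurements, lam=1e-2):
    """L2 minimum-norm inverse estimate (Tikhonov-regularized).

    leadfield : (n_sensors, n_sources) forward model mapping
    unit-amplitude sources to sensor signals; measurements :
    (n_sensors,) or (n_sensors, n_times); lam : regularization.
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, measurements)

# Toy example: 6 sensors, 4 candidate sources, one truly active source.
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 4))
s_true = np.array([0.0, 1.0, 0.0, 0.0])
m = L @ s_true
s_hat = minimum_norm(L, m, lam=1e-6)
```

In realistic settings, with far more sources than sensors, the estimate is a blurred approximation of the true activity, which is why the point-spread and regularization choices discussed in Section 5 must be described.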

In addition to the MEEG data itself, two other types of data are needed to generate solutions to the forward and inverse problems. These are the spatial locations of the sensors and fiducials relative to the head (Section 3.2), and the type of geometric data to which the MEEG data will be coregistered (e.g., a spherical head model or a structural MRI). The procedure used to coregister the locations of measurement sensors and fiducials with the geometric data must be described (see Section 2.1 for definitions; Section 3.2 for sensor digitization methods). If structural MRI data are used, it should be made clear whether a normalized structural MRI volume such as the MNI152 template, or individual participant MRIs, are being used for the data analyses. If high-resolution structural MRIs are acquired in individual participants, all data acquisition parameters, as well as normalization procedures, should be described.

It is essential that all details of the head model and the source model are given. For EEG, the numerical method (e.g., boundary element modelling (BEM), finite element modelling (FEM)) used to model the head by reconstruction from the structural MRI of the head must be reported, and the values of electrical conductivity of the different tissues that were used in the calculations must be specified. This is less of a problem for MEG where magnetic fields are not greatly distorted by passing through different tissue types (Baillet, 2017). For the volume conductor model, the type of distribution of the solution space and the distance between points in the space need to be reported, as well as the method used to extract the gray matter mantle, if the solution points are restricted to it. In this case, the head model should ideally be based on an individual MRI of the participant’s head, especially in clinical studies where brain lesions or malformations may be involved. In certain clinical settings, approximate head models might be adequate, although their limitations should be explicitly acknowledged (Valdés-Hernández et al. 2009). The source localization method (e.g., equivalent current dipole fitting, distributed model, dipole scanning), software and its version (e.g., BESA, Brainstorm, Fieldtrip, LORETA, MNE, Nutmeg, SPM, etc.) must be reported, with inclusion of parameters used (e.g., the regularization parameter) and appropriate reference to the technical paper describing the method in detail.

Connectivity analysis

We refer here to connectivity analyses as any methods that aim to detect functional coupling between two or more channels or sources. It is important to report and justify the use of either sensor or source space for the calculation of derived metrics of coupling (e.g., network measures such as centrality or complexity). A recent general reference on connectivity measures can be found in O’Neill et al. (2018).

When using multivariate measures (either data-driven or model-based) such as partial coherence and multiple coherence, the exact variables that were analysed, and the exact variables with respect to which the data were partialised, marginalised, or conditioned, should be reported. When reporting data-driven measures of spectral coherence or synchrony (Halliday et al., 1995), the following should be considered and reported: the exact formulation (or a reference to it), whether the measure has been debiased, and any subtraction or normalisation with respect to an experimental condition or a mathematical criterion. In the case of autoregressive (AR)-based multivariate modelling (e.g., the partial directed coherence family of measures; Baccala & Sameshima, 2001), the exact model parameters (number of variables, data points and window lengths, as well as the estimation methods and fitting criteria) should be reported.

While the committee agrees that statistical metrics of dependency can be obtained at the channel level, it should be clear that these are not per se measures of neural connectivity (Haufe et al., 2012). The latter can only be obtained by an inferential process that compensates for volume conduction and for spurious connections due to unobserved common sources or cascade effects. Dependency measures can nevertheless be useful, e.g., for biomarking. The possible insight into brain function derived from these measures should be critically discussed. This is particularly important since the interpretation of MEEG-based connectivity metrics may be confounded by aspects of the data that do not directly reflect true neural events (Schoffelen & Gross, 2009; Valdes-Sosa et al., 2011). Inference about connectivity between neural masses can only be performed with dependency measures at the source level and correct inferential procedures.

Table 3. Data pre-processing and processing checking-sheet

Workflow – Indicate in detail the exact order in which preprocessing steps took place

Software – Which software and version was/were used for preprocessing and processing, as well as the analysis platform

– In-house code should be shared/made public

Generic preprocessing – Indicate any downsampling of the data

– If electrodes/sensors were removed, which identification method was used, which ones were deleted, if missing channel interpolation is performed indicate which method

– Specify detrending method (typically polynomial order) for baseline correction

– Specify noise normalization method (typically used in multivariate analyses)

– If data segmentation is performed, indicate the number of epochs per subject per condition

– Indicate the spectral decomposition algorithm and parameters, and if applied before/after segmentation

Detection/rejection/correction of artifacts – Indicate what types of artifact present in the data

– For automatic artifact detection, describe algorithms used and their respective parameters (e.g., amplitude thresholds)

– For manual detection, indicate the criteria used with as much detail as needed for reproducibility

– Indicate if trials with artifacts were rejected or corrected. If using correction, indicate method(s) and parameters

– If trials/segments of data with artifacts have been removed, indicate the average number of remaining trials per condition across participants (include minimum and maximum number of trials across participants)

– For resting state data, specify the length of time of the artifact-free data

Correction of artifacts using BSS/ICA – Indicate how many total components were generated, what type of artifact was identified and how, and how many components were removed (on average across participants)

– Display example topographies of the ICs that were removed

Filtering – Indicate type of filter

– Include filter parameters: frequency cutoff, roll-off rate/slope, etc.

Re-referencing (for EEG) – Report the digital reference and how this was computed

– Justify choice of the re-reference scheme

Source modelling – Method of co-registration of measurement sensors to structural MRI scan of the participant’s head or MRI template (for EEG in particular)

– Volume conductor model (e.g., BEM/FEM) and tissue conductivity values (for EEG)

– Source model details (e.g., dipole, distributed, dipole scanning)

– Report parameters used for source estimation (i.e., regularization of the data covariance matrix; constraints used for source model)

Connectivity – Sensor or source space?

– Detail exact variables that have been analysed (which of the data was partialised, marginalised, or conditioned)

– For model based approach, indicate model parameters

– Specify metrics of coupling


5. Biophysical and Statistical Analyses

Properties of the data submitted to statistical analysis

When analysis focuses on specific channels, source-level regions of interest, peaks, components (see also Section 6.1.1 on nomenclature related to this term), time and/or frequency windows, it is essential to report how these were determined, and where appropriate, why this mode of selection is unbiased. One should also report whether specific data were left out and how much of the total data this represents. Special care must be taken to avoid circular analyses or fishing, also known as "double dipping" (e.g., selecting a specific channel for analysis on the grounds that it shows the strongest grand-average difference and then performing statistical testing on that channel with the same data) (Kriegeskorte et al., 2009; Kriegeskorte et al., 2010). In other words, the criteria for selecting a given channel must be independent of the statistical test of interest (e.g., based on an orthogonal contrast, or on a priori assumptions derived from previous studies/independent data).

Region-of-interest (ROI) analysis in time, frequency or space should be used with caution and, unless justified a priori or via independent data (a separate session or run), is best accompanied by an analysis incorporating the full data space. For time/frequency ROIs, one must define how peaks, components and latencies were measured (e.g., manually or automatically) and whether peak amplitude (or peak-to-peak amplitude), averages around the peak, or area-under-the-curve measures were used. When peaks are the object of analysis, the following should be specified: whether the peak latency was determined on the group average and the amplitude then measured at or around this latency for every participant, or whether the peak latency was determined individually for each participant, and by which criterion (e.g., the most negative value within a given window). If automated methods were used, report which criteria/parameters were applied or, if applicable, which peak detection method (and software) was used. Reporting this information is especially pertinent in ERP studies because of the specification of the "baseline" period to which sensory, cognitive or motor activity is referenced.

For spatial ROIs, because of the smooth spatial distribution of MEEG data, focusing on isolated regions of interest without considering the spatial distribution of signal strength in their wider neighbourhood may yield incorrect estimates of activation and connectivity patterns. The dimensionality of source-level descriptions may be reduced by merging neural signals over a reasonable number of cortical parcels; the parcellation scheme must be defined.
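A sketch of automated peak measurement within an a priori window; the window, polarity and averaging half-width below are hypothetical reporting parameters, not recommended values:

```python
import numpy as np

def measure_peak(erp, times, tmin, tmax, polarity='neg', half_width=0.01):
    """Peak latency and mean-around-peak amplitude in an a priori window.

    erp : (n_samples,) averaged waveform for one channel; times :
    (n_samples,) in seconds. The search window [tmin, tmax], the
    polarity, and the averaging half-width should all be reported.
    """
    win = (times >= tmin) & (times <= tmax)
    seg, seg_t = erp[win], times[win]
    idx = np.argmin(seg) if polarity == 'neg' else np.argmax(seg)
    latency = seg_t[idx]
    around = (times >= latency - half_width) & (times <= latency + half_width)
    return latency, erp[around].mean()

# Example: a synthetic negative deflection peaking at 170 ms,
# searched in a hypothetical 130-210 ms window.
times = np.arange(-0.1, 0.5, 0.001)
erp = -3.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
latency, amplitude = measure_peak(erp, times, 0.13, 0.21, polarity='neg')
```

Whether the window comes from the group average or from independent data is the selection issue discussed above.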

Regardless of the statistical framework employed to analyse ROI data, it is recommended that assumptions used in the model be checked (e.g., normality of residuals) and appropriate corrections be performed to make the statistical tests more conservative and maintain the false positive rate at the nominal level.

Mass univariate statistical modelling

Mass-univariate statistics can be performed at the participant level, the group level, or both, using a hierarchical or mixed-model approach, and over the whole data volume (3D space for source analysis) and/or the spatio-temporal space for channel analysis over time (Kilner et al., 2005; Pernet et al., 2011). It is essential to report the details of each design, including the software (and its version) as well as the functions used. For instance, describe all regressors included at the participant level, and then which ones were used at the group level. When stimulus or participant parameters are regressed, describe how the regressors (predictors and interactions) in the final model were selected and which model selection procedures, if any, were used. If only group-level analyses are performed on averages, specify whether weighting was performed and/or whether a pooling of channels was implemented. Compared to tomographic methods, MEEG can have missing data (e.g., bad channels, or transient intervals with artifacts). It is essential to report whether missing data were dealt with in the dataset itself, e.g., replacement of bad channels by interpolation (see Section 4), or handled in the statistical analyses.

Since many statistical tests are typically performed on MEEG datasets, results must be corrected for multiple testing/comparisons (e.g., full-brain analyses, or multiple feature/component maxima). The method used (e.g., random field theory, maximum statistics based on permutation, maximum cluster mass based on bootstrap, threshold-free cluster enhancement, Bonferroni, false discovery rate, empirical Bayesian inference) must be reported together with the adopted threshold. Note that both a priori and a posteriori (i.e., derived from autocorrelation of the observed data) thresholds based on successive data points (Guthrie & Buchwald, 1991) do not adequately control the family-wise Type 1 error rate and must therefore be avoided (Piai et al., 2015). This is in contrast to a posteriori thresholds using null distributions (bootstrap and permutations), which have been shown to control the family-wise Type 1 error rate well (Maris & Oostenveld, 2007; Pernet et al., 2014). When used, report which technique and software (and version) were used.
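The maximum-statistic permutation approach mentioned above can be sketched for paired data as follows (sign-flipping within subject; the design and all parameters are illustrative):

```python
import numpy as np

def maxstat_threshold(data_a, data_b, n_perm=1000, alpha=0.05, seed=0):
    """Family-wise error control via the maximum-statistic permutation method.

    data_a, data_b : (n_subjects, n_tests) paired observations (e.g., one
    value per channel/time point). Condition labels are flipped within
    subject on each permutation; the null distribution of the maximum
    absolute t-value across all tests yields one corrected threshold.
    """
    diff = data_a - data_b

    def tvals(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

    rng = np.random.default_rng(seed)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        null_max[i] = np.abs(tvals(diff * signs)).max()
    threshold = np.quantile(null_max, 1 - alpha)
    return tvals(diff), threshold
```

Observed t-values exceeding the threshold are significant with family-wise error controlled at alpha across all tests simultaneously.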

Multivariate modelling and predictive models

Multivariate statistical inference

Multivariate statistical tests (e.g., MANOVA, linear discriminant analysis) can be performed on MEEG data and often proceed along one data dimension, leading to many statistical tests. For example, a linear discriminant analysis (LDA) can be performed over sensor space repeatedly over time and/or frequencies. Conversely, multiple predetermined time/frequency points for each channel (or source location) can be used, and the classification performed per channel. In any case, this results in a multiple comparisons problem that needs to be properly addressed, typically within a resampling scheme (bootstrap or permutation).

Multivariate pattern classification

When a decoding approach is used, one must describe: (i) the classifier used (e.g., LDA, Support Vector Machine (SVM), Naive Bayes, Elastic Net, etc.) and its implementation/software; (ii) the distance metric (e.g., Euclidean distance, Pearson correlation, Spearman correlation); (iii) whether any parameter selection was performed for the classifier (e.g., by optimizing parameters in a subset of trials/participants, or keeping the default options of a given software package); (iv) how chance performance was computed (e.g., empirically, with random permutations, etc.); (v) the validation scheme (e.g., leave-one/two-out, N-fold cross-validation), in which the test set must be independent of the training set to avoid bias and unrealistically high classification rates. If surrogate data are used to evaluate the chance performance of the decoder, the generation technique and the parameters used should also be reported.
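A minimal sketch covering several of these reporting items, assuming a linear SVM, stratified 5-fold cross-validation, and label-permutation chance estimation (toy data; nothing here prescribes these particular choices):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import permutation_test_score, StratifiedKFold

# Toy decoding analysis with an explicit, reportable configuration.
rng = np.random.default_rng(3)
n_trials, n_features = 100, 40             # e.g., flattened channel x time features
X = rng.standard_normal((n_trials, n_features))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1] += 0.5                           # make the classes separable

clf = SVC(kernel="linear", C=1.0)          # (i), (iii): fixed, untuned parameters
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # (v)
score, perm_scores, pval = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=200, random_state=0)
# (iv): `perm_scores` is the empirical chance distribution under shuffled
# labels; `score` is the cross-validated accuracy and `pval` compares the two.
```

Each line that names a choice (classifier, its parameters, the cross-validation folds, the permutation count) is exactly the kind of detail items (i)-(v) ask authors to report.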

Source modelling

Source modelling and reconstruction can be regarded as a step in the processing pipeline (see Section 4.4.) that is used to obtain a dependent variable (e.g., the amount of power in a particular frequency band at location X), which can subsequently be subjected to a univariate statistical test. However, before analyzing source activity, it is essential to provide readers with information on the quality of the reconstruction.

Since there are multiple available methods to estimate sources using inverse solutions, the expected accuracy of the method should be described, including the point-spread function and localization error. The effect of the number of electrodes on the accuracy of localization should be taken into account (e.g., estimates based on electrodes from the (clinically-based) 10-20 system, or even the 10-10 system, may be less accurate than estimates based on 128 electrodes or more). Ideally, the robustness of the estimate to the choice of parameters should also be reported (e.g., how much a different choice of regularization, or a different modelled interval, changes the solution). In addition, where estimates are performed on multiple participants, error measures (variance) captured by the model should be reported. For dipolar models, uncertainty about the location should also be reported (i.e., spatial confidence bounds; e.g., Fuchs et al., 2004).
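As a small illustration of one reconstruction-quality figure that can be reported for dipole models, the goodness of fit is the percentage of measured field variance explained by the forward-modelled field (the sensor values below are arbitrary placeholders):

```python
import numpy as np

# Goodness of fit of a dipole model: variance of the measured topography
# explained by the forward solution. Values are illustrative only.
measured = np.array([1.2, -0.8, 0.5, 2.0, -1.5, 0.3])   # sensor topography
modelled = np.array([1.0, -0.9, 0.6, 1.8, -1.4, 0.2])   # forward-modelled field

residual = measured - modelled
gof = 100.0 * (1.0 - np.sum(residual**2) / np.sum(measured**2))
```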

Biophysical modelling and connectivity analyses

It must be clearly stated and justified how the connectivity metrics subjected to subsequent statistical evaluation are derived. The type of statistical dependence measure used, in either sensor or source space, should be specified (e.g., correlation, phase coupling, amplitude coupling, spectral coherence, entropy, DCM, Granger causality), as well as the assumptions underlying the analysis (e.g., linear vs unspecified; directional vs non-directional). The calculation of specific graph-theoretical measures on the basis of dependency measures should be motivated and correctly associated with the data (e.g., shorter path length is often interpreted, but its meaning in the context of functional adjacency matrices has been questioned; Sporns, 2014). It should be clearly stated whether a generative model is used (and what data types form its inputs), or whether the measure assumes some specific feature of the data distribution (e.g., one versus two different populations of participants). It is necessary to state the nodes used for the connectivity matrix (e.g., channels, sources), the function used for the time-frequency decomposition (e.g., Morlet, Hilbert, Fourier, etc.), and the type of statistics used.
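To give one concrete instance of a dependence measure from the list above, here is a toy computation of magnitude-squared spectral coherence between two sensor-space channels (the sampling rate, frequencies, and noise level are illustrative assumptions):

```python
import numpy as np
from scipy.signal import coherence

# Two synthetic channels sharing a 10 Hz rhythm plus independent noise.
fs = 250.0                                  # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(4)

shared = np.sin(2 * np.pi * 10 * t)         # common alpha-band rhythm
chan_a = shared + 0.5 * rng.standard_normal(t.size)
chan_b = shared + 0.5 * rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence per frequency bin
freqs, coh = coherence(chan_a, chan_b, fs=fs, nperseg=512)
alpha_coh = coh[np.argmin(np.abs(freqs - 10))]
# Coherence approaches 1 at the shared 10 Hz rhythm and stays low elsewhere.
```

Reporting would then cover exactly the items listed above: the measure (coherence), the space (sensor), the nodes (the two channels), and the spectral decomposition used (Welch segments of `nperseg` samples).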

For biophysical methods such as Dynamic Causal Modelling, details should be given of the neural model employed (e.g., ERP, canonical microcircuit), the full space of functional architectures considered and connectivity matrices present/modulated (forward, backward, lateral, if intrinsic), the vector of between-trial effects, the number of modes, the temporal window modelled, and the priors on source locations. Finally, information should be provided on the statistical approach used for inference at the level of the model or the family of models (Fixed- or Random-effects, FFX or RFX) as well as at the level of parameters (Frequentist versus Bayesian, Bayesian Model Averaging (BMA) over all models versus conditioning on the winning family/model, etc).

Table 4. Biophysical and statistical analyses check-sheet

ROIs – How were these determined, i.e., what was the mode of selection (e.g., a priori from the literature or from independent data)?

– Report specific channels/regions of interest, peaks, components, time and/or frequency windows, sources

Summary measures – Report how these were obtained

– Justify how the selection of dependent variables is unbiased (especially how the temporal and spatial ROIs were chosen)

– Report how peaks, components and latencies were measured

Statistical analysis/modelling – Software and version used, and analysis platform

– Report the model used, including all regressors (and covariates of no interest)

– Check and report statistical assumptions (e.g., normality, sphericity)

– Provide model details when complex designs are used

– Provide details on the classification method and validation procedure

– Note the method used for multiple comparisons correction and the chosen level of statistical significance

– Report the classifier used, the distance metric and the parameters

– Report how chance level was determined

– Detail the cross-validation scheme

Source modelling – Indicate the quality of the model (goodness of fit, percentage of variance explained, residual mean squares)

– Report spatial uncertainty for dipolar sources

Connectivity analyses – Sensor or source space

– Software and version, and analysis platform

– Domain, type of connectivity and measure(s) used

– Definitions of nodes/regions of interest

DCM – Specify the type of neuronal model

– Ensure fit of the model to the data before comparing different models

– Describe modulatory effects, confounds and mitigating procedures

– Define all connectivity architectures tested and the connectivity matrices present and modulated

– Describe the statistics used for model/family inference (Random vs. Fixed effects) and parameter inference (Frequentist vs. Bayesian)