Minimum constant neuron firing rate

Please forgive what may be an elementary question for many of you.

I am trying to understand the range of firing rates in an idealized neuron. I understand what governs the maximum firing rate of a neuron (the refractory period); however, I would like to know how to figure out what the minimum firing rate would be.

The context of this neuronal excitation would be one of applying a constant, sustained electrical stimulus which barely satisfies the threshold requirement for an action potential to occur in the neuron. Put otherwise, the neuron would be exposed to the lowest level of sustained stimulation required to cause a steady spike train.

I have found information on average firing rates (in various species, in various parts of the nervous system, and in response to various amplitudes of stimuli), but I have not yet found anything which describes how to find the lowest rate of firing in an idealized neuron. I am also unclear as to whether the threshold of an action potential refers to the minimum required to cause a single, solitary action potential (which will not be repeated even if the stimulus is kept steady) OR whether reaching threshold implies that the action potential would be repeated steadily. Please forgive my ignorance of the subject, as I am only just starting to study this fascinating and intimidating subject.

Thank you for your time and consideration!
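One way to make this concrete is the leaky integrate-and-fire (LIF) idealization, where the steady firing rate under a constant current has a closed form. As the current approaches rheobase (the minimum current whose steady-state voltage just reaches threshold), the interspike interval grows without bound, so this idealized neuron has no nonzero minimum rate; it can fire arbitrarily slowly. (So-called Type II neurons behave differently and jump to a finite minimum frequency.) A minimal sketch, with all parameter values being illustrative assumptions:

```python
import math

def lif_rate(I, tau=0.02, R=1e7, V_rest=-0.07, V_reset=-0.07,
             V_th=-0.055, t_ref=0.002):
    """Steady firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by constant current I (amps). Returns 0 below rheobase."""
    V_inf = V_rest + R * I            # asymptotic voltage for this current
    if V_inf <= V_th:                 # never reaches threshold: no spikes
        return 0.0
    # time to charge from reset up to threshold
    T = tau * math.log((V_inf - V_reset) / (V_inf - V_th))
    return 1.0 / (T + t_ref)

I_rheo = ((-0.055) - (-0.07)) / 1e7   # rheobase current for these parameters
for frac in (1.0001, 1.01, 1.5, 3.0):
    print(f"I = {frac:.4f} x rheobase -> {lif_rate(frac * I_rheo):.2f} Hz")
```

Just above rheobase the rate is a few hertz and keeps falling toward zero as the stimulus approaches threshold, while the refractory period caps the rate at 1/t_ref from above.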



Neurons use stereotyped, pulsed signals called action potentials to signal over long distances. Since the shape of the action potential is believed to carry minimal information, the time of arrival of the action potential carries all the information. Often the detailed voltage waveform is abstracted into a stream of binary events, where most of the stream represents no action potential occurrence ('zeros') and an isolated '1' represents an action potential. The binary waveform is referred to as a spike train.

  • A neuron could signal one value by rate, another by the variance in the rate, and another by skew in the rate.
  • Relative timing in two neurons might signal intensity or phase or causality.
  • Two neurons could signal four different conditions if their spikes were treated as binary words. Synchrony would be important.
  • The order of occurrence of specific interspike intervals could modulate a synapse in a specific fashion.

Since spike trains are (typically) sending messages to synapses, it might be useful to ask how we can interpret a spike train in a way which makes synaptic sense. A first approximation might ask: over what time scale does the synapse integrate information? If the integration time is very short, then only events which are close together matter, so synchrony in a code is a useful concept. On the other hand, if the integration time is long, then the total number of spikes matters, so a rate code is a useful concept. Perhaps we should analyse spike trains on every time scale. In addition to time scale, we could analyse trains in several ways, each corresponding to a different (measured, or hypothesized) aspect of synaptic function. For instance, activation and learning might have different effective coding from the same spike train.

We want to talk about schemes for finding patterns in spike trains. Bruce will describe patterns in EOD discharge data, and I will describe some theoretical techniques which seem to hold promise in their flexibility and potential synaptocentric (love those neologisms) orientation. The thread between them is the use of methods that avoid some problems with traditional binning methods. Specifically, we will consider schemes that treat the spike time as the most important variable.

  • relating neural activity to stimuli
  • trying to find repetitive patterns in a motor discharge
  • relationship between descending neuronal activity and motor output (Bruce Carlson)
  • looking for codes distributed across a neuron population
  • functional interactions among neurons
  • Post stimulus time histogram -- bin the spikes into time slots referenced to the stimulus.
  • Spike density function (3) and other digital filters (4)
  • Auto- and cross-correlation functions of the entire train using spike density function to smooth
  • ISI distributions and return maps (3)
  • Spike-triggered average (5)
  • Spike-centered distance analysis (1), (2)

The techniques we will explore here are based on references (1), (2) and (3). These avoid problems with binning in time and allow the flexibility of imposing synaptic interpretations on the train. Bruce has talked about how convolving a train with a Gaussian distribution allows a smoother, more reliable estimate of burst parameters. What we want to do now is describe how spike trains might be compared to each other for similarity.

To set the stage, let us consider a pure rate code. Each train is then specified by its rate, and the difference between trains is the difference of their rates. We could say that the distance separating the trains is the difference of their rates, or that rate difference is a distance measure between spike trains. What we would like is a distance measure which works on any time scale. Then we could speak of the distance between two trains at a given time precision. Such a distance measure would allow us to move smoothly from strict synchronization codes (very small spike time difference allowed) to rate codes (large time differences allowed within an overall interval). The time resolution at which you analyse the train might also correspond to the time constant of the synapse.

Two recent papers describe techniques for computing the distance between two spike trains at any time resolution. Victor's paper (1) defines the distance between two spike trains in terms of the minimum cost of transforming one train into the other. Only three transforming operations are allowed: move a spike, add a spike, or delete a spike. The cost of addition or deletion is set at one. The cost of moving a spike by a small time is a parameter which sets the time scale of the analysis. If the cost of motion is very small then you can slide a spike anywhere, but it still costs to insert or delete a spike, so the distance between two trains is just the difference in number of spikes. This is similar to a rate code distance. If the cost of motion is very high then any spike is a different spike, so the minimum cost becomes the cost of inserting or deleting all the spikes. The distance between two trains becomes approximately the number of spikes in the two trains which are not exactly aligned, sort of a measure of synchrony. Intermediate values of cost interpolate smoothly between perfect synchrony and a rate code.
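This minimum transformation cost can be computed with a small dynamic program, analogous to string edit distance. The sketch below is an illustrative implementation (not Victor's original code); spike times are in seconds and q is the cost per second of moving a spike:

```python
def victor_distance(s1, s2, q):
    """Victor-Purpura spike train distance: minimum cost of transforming
    one spike-time list into the other, with insert/delete cost 1 and
    cost q*|dt| to move a spike by dt (q in 1/seconds)."""
    n, m = len(s1), len(s2)
    # D[i][j] = distance between the first i spikes of s1 and first j of s2
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i          # delete all i spikes
    for j in range(1, m + 1):
        D[0][j] = j          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1,                   # delete a spike
                          D[i][j - 1] + 1,                   # insert a spike
                          D[i - 1][j - 1]                    # move a spike
                          + q * abs(s1[i - 1] - s2[j - 1]))
    return D[n][m]

a = [0.1, 0.2, 0.3]
b = [0.1, 0.25, 0.3]
print(victor_distance(a, b, q=0))     # → 0.0 (free moves: only spike count matters)
print(victor_distance(a, b, q=1000))  # → 2.0 (delete + insert beats moving the middle spike)
```

At q = 0 the distance reduces to the difference in spike counts (a rate code); at large q every misaligned spike costs an insertion plus a deletion (a synchrony code), exactly the limits described above.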

Rossum's paper (2) computes a distance which is closely related to Victor's distance, but much easier to implement and easier to explain. Each spike train is convolved with an exponential function:

f(t) = Σi H(t − ti) exp(−(t − ti)/tc)

where ti is the time of occurrence of the ith spike and H is the unit step function. You get to choose the time constant tc of the exponential, which sets the time scale of the distance measurement. Call the convolved waveforms f(t) and g(t). You then form the distance as:

D²(tc) = (1/tc) Σt [f(t) − g(t)]² dt

where dt is the spike sampling time step. This distance can be considered an approximate difference between two post-synaptic current sequences triggered by the respective spike trains, because such currents tend to have approximately exponential shape. In a sense, the Rossum distance measures the difference in the effect of the two trains on their respective synapses (to a very crude approximation).
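The Rossum recipe is easy to implement directly. A minimal sketch using a brute-force convolution on a time grid (grid length, step, and spike times are illustrative assumptions):

```python
import math

def rossum_distance(train1, train2, tc, dt=0.001, t_max=1.0):
    """van Rossum distance: convolve each spike train with exp(-t/tc),
    then integrate the squared difference (discretized with step dt)."""
    n = int(t_max / dt)

    def convolve(train):
        # f(t) = sum over spikes of H(t - ti) * exp(-(t - ti)/tc)
        f = [0.0] * n
        for ti in train:
            for k in range(n):
                t = k * dt
                if t >= ti:
                    f[k] += math.exp(-(t - ti) / tc)
        return f

    f, g = convolve(train1), convolve(train2)
    return math.sqrt(sum((fi - gi) ** 2 for fi, gi in zip(f, g)) * dt / tc)

a = [0.1, 0.3, 0.5]
b = [0.1, 0.32, 0.5]   # middle spike moved by 20 ms
for tc in (0.005, 0.02, 0.1):
    print(f"tc = {tc:5.3f} s -> distance {rossum_distance(a, b, tc):.3f}")
```

With tc much smaller than the 20 ms offset the moved spike counts as fully mismatched; as tc grows past the offset the exponentials overlap and the distance falls toward zero, mirroring the synchrony-to-rate continuum described above.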

The Rossum time scale, tc, and Victor's cost parameter are related by a reciprocal relation: the cost of moving a spike per unit time is inversely proportional to tc.

We can compare the two distance measures over a wide range of time scales using this reciprocal relation. Two of the matlab programs given below do this comparison, but an example here might help. The following image shows two spike trains. The blue train is regular and the red train is a Gaussian-dithered version of the blue train. At small time scales, the Victor distance is 8 because 4 spikes are not exactly aligned. This means that 4 spike deletions and 4 insertions are necessary to transform one train into the other. At time scales comparable to the dither (in this case std. dev. = 10 mSec), the Victor distance starts to drop because it becomes cheaper to move a spike. At long time scales the Victor distance goes to zero because both trains have the same number of spikes. The Rossum distance falls more smoothly because it depends on a smooth exponential weighting function. The distance at short time scales is similar because the exponentials have essentially fallen to zero. At large time scales the Rossum distance also goes to zero, because it too measures the total number of spikes. Which distance you decide to use depends on how you think the spike train is interpreted post-synaptically. Note that the distance is computed at all possible time scales, so that different criteria of synchrony are automatically available.

Once we can compute distances between spike trains, we can try out several techniques outlined in reference (1). The following assumes that trains are caused by some controlled stimulus, and can therefore be categorized a priori by stimulus type. These steps have been implemented in the software described below.

  • Information estimate. How well do the a-priori types and actual trains match up?
    • Compute the pair-wise distances between all recorded trains (at some specified time scale).
    • For each train:
      • Compute the average distance to each a-priori stimulus group (including the group of the current train)
      • Select the group with the minimum average distance as the effective group to which the train belongs.
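The steps above can be sketched with any pairwise distance function. For brevity, a toy rate-difference distance stands in below; the function and variable names are illustrative, not from reference (1):

```python
def classify_by_group_distance(trains, groups, dist):
    """For each train, compute the mean distance to every a-priori group
    (excluding the train itself) and assign it to the nearest group.
    Returns confusion counts {(true_group, assigned_group): n}."""
    confusion = {}
    for i, t in enumerate(trains):
        avg = {}
        for g in set(groups):
            members = [j for j in range(len(trains))
                       if groups[j] == g and j != i]
            avg[g] = sum(dist(t, trains[j]) for j in members) / len(members)
        assigned = min(avg, key=avg.get)       # nearest group on average
        key = (groups[i], assigned)
        confusion[key] = confusion.get(key, 0) + 1
    return confusion

# toy example: distance = difference in spike count (a pure rate code)
rate_dist = lambda a, b: abs(len(a) - len(b))
trains = [[0.1]*3, [0.1]*4, [0.1]*3, [0.1]*10, [0.1]*11, [0.1]*9]
groups = ['low', 'low', 'low', 'high', 'high', 'high']
print(classify_by_group_distance(trains, groups, rate_dist))
# → {('low', 'low'): 3, ('high', 'high'): 3}
```

From these confusion counts one can then estimate the transmitted information at each time scale, as in reference (1).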

      Uses of a spike train distance measure:

      • Song length and time scale analysis for cricket song (7). The authors conclude that the time scale of analysis is 10 mSec and about 2 syllables of the song are necessary for full separation of songs.
      • Function of the visual cortex (8).
      • Burst similarity in a long train. See below where we take a burst located by Bruce Carlson's convolution method and slide it along the spike train looking for minimum distance fits.
      • Regular and dithered spike trains. code. An example image is given above.
      • Currents applied to integrate-and-fire neurons (with adaptation). code. The first figure below shows the simulated voltage, current and spike trains from two groups of 5 IF neurons in response to a square current pulse and the same pulse dithered with Gaussian noise. The second figure shows the two trains which fire during the stimulus and the resulting Rossum distances, taken pairwise between every possible pair of spike trains (45 total). Red indicates one group, blue the other, and black the cross-distance. Summing the distances over all timescales and plotting them by group shows that the groups clump by distance. For comparison, the simple time-bin histogram distance is shown at the bottom.

      • Computing the bit rate from distance clusters. code. The first figure below shows the simulated voltage, current and spike trains from four groups of 5 IF neurons in response to a square current pulse modulated with 4 different amplitudes of sine wave. The image below shows the current, 20 spike trains (color coded) and the resulting information as a function of scale. The peak information is about 1 bit, implying fairly poor discrimination of the 4 groups of spike trains. At large scales the bit rate drops, implying that average spike rate carries little information.
      • Multidimensional scaling and temporal profiling. This code is modified from (6) to include 3D plotting and the temporal profiling procedure. To run, it also needs the programs in this ZIP file, which is from (6). The call to fminu must be modified to call fminunc. The first image shows the optimal embedding of the train distances in 3D (data from the previous image). The numbers correspond to individual trains and the colors to the 4 groups. The second image is the temporal profile for this 3-space using 5 time-bins for each spike train. It shows that dimension 1 is coding a weighted sum, so that high values along the axis mean a low initial firing rate, then a high rate, followed again by a low rate. The second dimension is encoding a temporal pattern of approximately the first derivative of dimension 1.
      • Find a burst by minimizing the Rossum distance. code. The figure below shows an EOD train (first panel) with one burst isolated (by hand, in the second panel). The summed distance is a simple sum of log-distributed time scale distances (third panel). The bottom panel shows a burst (the one starting at about 13.8 sec) aligned by using the minimum distance (about 170). A separate fit to the burst at about 9.35 seconds (classified by Bruce as a different type) shows a minimum distance of about 200. The burst at about 18.7 gives a best-fit distance of about 150. A random chunk of train at about 6 seconds gives a minimum distance of 280. Another run scanning from 2 seconds to 19 seconds found all the bursts with distances around 200. Of course, the reference burst gave a distance of zero.

        - Firing pattern analysis in Windoze including ISI, return map, firing rate estimation, and correlation (implements the techniques in (3)). - Time series analysis - Spike Interchange File Format
    • Signal Processing Techniques for Spike Train Analysis using MatLab - These M-files implement the analysis procedures discussed in chapter 9 of "Methods in Neuronal Modeling".
    • NTSA Workbench - Neuronal Time Series Analysis (NTSA) Workbench is a set of tools, techniques and standards designed to meet the needs of neuroscientists who work with neuronal time series data.
    • NeuroExplorer -- Flexible train analysis


      Exact firing time statistics of neurons driven by discrete inhibitory noise

      Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. In order to investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude, but apply as well to finite amplitude post-synaptic potentials, thus being able to capture the effect of rare and large spikes. The developed methods are also able to reproduce the average firing properties of heterogeneous neuronal populations.


      1. Introduction

      Spiking neural networks that emulate neural ensembles have been studied extensively within the context of dynamical systems (Izhikevich, 2007), and modeled as a set of differential equations that govern the temporal evolution of its state variables. For a single neuron, the state variables are usually its membrane potential and the conductances of ion channels that mediate changes in the membrane potential via flux of ions across the cell membrane. A vast body of literature, ranging from the classical Hodgkin-Huxley model (Hodgkin and Huxley, 1952), FitzHugh-Nagumo model (FitzHugh, 1961), Izhikevich model (Izhikevich, 2003) to simpler integrate-and-fire models (Abbott, 1999), treats the problem of single-cell excitability at various levels of detail and biophysical plausibility. Individual neuron models are then connected through synapses, bottom-up, to form large-scale spiking neural networks.

      An alternative to this bottom-up approach is a top-down approach that treats the process of spike generation and neural representation of excitation in the context of minimizing some measure of network energy. The rationale for this approach is that physical processes occurring in nature have a tendency to self-optimize toward a minimum-energy state. This principle has been used to design neuromorphic systems where the state of a neuron in the network is assumed to be either binary in nature (spiking or not spiking) (Jonke et al., 2016), or replaced by its average firing rate (Nakano et al., 2015). However, in all of these approaches, the energy functionals have been defined with respect to some statistical measure of neural activity, for example spike rates, instead of continuous-valued neuronal variables like the membrane potential. As a result in these models, it is difficult to independently control different neuro-dynamical parameters, for example the shape of the action-potential, bursting activity or adaptation in neural activity, without affecting the network solution.

      In Gangopadhyay and Chakrabartty (2018), we proposed a model of a Growth Transform (GT) neuron which reconciled the bottom-up and top-down approaches such that the dynamical and spiking responses were derived directly from a network objective or an energy functional. Each neuron in the network implements an asynchronous mapping based on polynomial Growth Transforms, which is a fixed-point algorithm for optimizing polynomial functions under linear and/or bound constraints (Baum and Sell, 1968; Gangopadhyay et al., 2017). It was shown in Gangopadhyay and Chakrabartty (2018) that a network of GT neurons can solve binary classification tasks while producing stable and unique neural dynamics (for example, noise-shaping, spiking and bursting) that could be interpreted using a classification margin. However, in the previous formulation, all of these neuro-dynamical properties were directly encoded into the network energy function. As a result, the formulation did not allow independent control and optimization of different neuro-dynamics. In this paper, we address these limitations by proposing a novel GT spiking neuron and population model, along with a neuromorphic framework, according to the following steps:

      • We first remap the synaptic interactions in a standard spiking neural network in a manner that the solution (steady-state attractor) could be encoded as a first-order condition of an optimization problem. We show that this network objective function or energy functional can be interpreted as the total extrinsic power required by the remapped network to operate, and hence a metric to be minimized.

      • We then introduce a dynamical system model based on Growth Transforms that evolves the network toward this steady-state attractor under the specified constraints. The use of Growth Transforms ensures that the neuronal states (membrane potentials) involved in the optimization are always bounded and that each step in the evolution is guaranteed to reduce the network energy.

      • We then show how gradient discontinuity in the network energy functional can be used to modulate the shape of the action potential while maintaining the local convexity and the location of the steady-state attractor.

      • Finally, we use the properties of Growth Transforms to generalize the model to a continuous-time dynamical system. The formulation will then allow for modulating the spiking and the population dynamics across the network without affecting network convergence toward the steady-state attractor.

      We show that the proposed framework can be used to implement a network of coupled neurons that can exhibit memory, global adaptation, and other interesting population dynamics under different initial conditions and based on different network states. We also illustrate how decoupling transient spiking dynamics from the network solution and spike-shapes could be beneficial by using the model to design a spiking associative memory network that can recall a large number of patterns with high accuracy while using fewer spikes than traditional associative memory networks. This paper is also accompanied by publicly available software implementing the proposed model (Mehta et al., 2019) using MATLAB©. Users can experiment with different inputs and network parameters to explore and create unique dynamics other than those reported in this paper. In the future, we envision that the model could be extended to incorporate spike-based learning within an energy-minimization framework similar to the framework used in traditional machine learning models (LeCun et al., 2006). This could be instrumental in bridging the gap between neuromorphic algorithms and traditional energy-based machine learning models.


      Mechanisms

      Warning

      The THREADSAFE mechanism case is a bit more complicated if the mechanism anywhere assigns a value to a GLOBAL variable. When the user explicitly specifies that a mechanism is THREADSAFE, those GLOBAL variables that anywhere appear on the left hand side of an assignment statement (and there is no such assignment with the PROTECT prefix) are actually thread specific variables. Hoc access to thread specific global variables is with respect to a static instance which is shared by the first thread in which mechanism actually exists.

      capacitance

      Syntax:

      section.cm (uF/cm2)

      section.i_cap (mA/cm2)

      Description: capacitance is a mechanism that is automatically inserted into every section. cm is a range variable with a default value of 1.0. i_cap is a range variable which contains the varying membrane capacitive current during a simulation. Note that i_cap is most accurate when a variable step integration method is used.

      hh

      Syntax:

      section.insert('hh')

      Description:

      See <nrn src dir>/src/nrnoc/hh.mod

      Hodgkin-Huxley sodium, potassium, and leakage channels. Range variables specific to this model are:

      This model uses the na and k ions to read ena, ek and write ina, ik.

      See <nrn src dir>/src/nrnoc/passive0.c

      Passive membrane channel. Same as the pas mechanism but hand coded to be a bit faster (avoids the wasteful numerical derivative computation of the conductance and does not save the current). Generally not worth using since passive channel computations are not usually the rate limiting step of a simulation.

      extracellular

      Syntax:

      section.insert('extracellular')

      .vext[2] -- mV

      .i_membrane -- mA/cm2

      .xraxial[2] -- MOhms/cm

      .xg[2]         -- mho/cm2

      .xc[2]         -- uF/cm2

      .extracellular.e -- mV

      Description:

      Adds two layers of extracellular field to the section. Vext is solved simultaneously with the v. When the extracellular mechanism is present, v refers to the membrane potential and vext (i.e. vext[0]) refers to the extracellular potential just next to the membrane. Thus the internal potential is v+vext (but see BUGS).

      This mechanism is useful for simulating the stimulation with extracellular electrodes, response in the presence of an extracellular potential boundary condition computed by some external program, leaky patch clamps, incomplete seals in the myelin sheath along with current flow in the space between the myelin and the axon. It is required when connecting LinearMechanism (e.g. a circuit built with the NEURON Main Menu ‣ Build ‣ Linear Circuit ) to extracellular nodes.

      i_membrane correctly does not include contributions from ELECTRODE_CURRENT point processes.

      See i_membrane_ at CVode.use_fast_imem() .

      The figure illustrates the form of the electrical equivalent circuit when this mechanism is present. Note that previous documentation was incorrect in showing that e_extracellular was in series with the xg[nlayer-1], xc[nlayer-1] parallel combination. In fact it has always been the case that e_extracellular was in series with xg[nlayer-1], and xc[nlayer-1] was in parallel with that series combination.

      Note

      The only reason the standard distribution is built with nlayer=2 is so that when only a single layer is needed (the usual case), then e_extracellular is consistent with the previous documentation with the old default nlayer=1.

      e_extracellular is connected in series with the conductance of the last extracellular layer. With two layers the equivalent circuit looks like:

      Extracellular potentials do a great deal of violence to one’s intuition and it is important that the user carefully consider the results of simulations that use them. It is best to start out believing that there are bugs in the method and attempt to prove their existence.

      See <nrn src dir>/src/nrnoc/extcell.c and <nrn src dir>/examples/nrnoc/extcab*.hoc.

      NEURON can be compiled with any number of extracellular layers. See below.

      Warning

      xcaxial is also defined but is not implemented. If you need those then add them with the LinearMechanism .

      Prior versions of this document indicated that e_extracellular is in series with the parallel (xc,xg) pair. In fact it was in series with xg of the layer. The above equivalent circuit has been changed to reflect the truth about the implementation.

      In v4.3.1 2000/09/06 and before vext(0) and vext(1) are the voltages at the centers of the first and last segments instead of the zero area nodes.

      Now the above bug is fixed and vext(0) and vext(1) are the voltages at the zero area nodes.

      From extcell.c the comment is:

      In v4.3.1 2000/09/06 and before extracellular layers will not be connected across sections unless the parent section of the connection contains the extracellular mechanism. This is because the 0 area node of the connection is “owned” by the parent section. In particular, root nodes never contain extracellular mechanisms and thus multiple sections connected to the root node always appear to be extracellularly disconnected. This bug has been fixed. However it is still the case that vext(0) can be non-zero only if the section owning the 0 node has had the extracellular mechanism inserted. It is best to have every section in a cell contain the extracellular mechanism if any one of them does to avoid confusion with regard to (the in fact correct) boundary conditions.


      Tonic firing typically occurs without presynaptic input and can be viewed as background activity. Tonic activity is often characterized by steady action potential firing at a constant frequency. Note that not all neurons have tonic activity at rest. It may serve to keep a steady background level of a certain neurotransmitter, or it can serve as a mechanism whereby both a decrease and an increase in presynaptic input can be transmitted. When a neuron is silent at rest, only an increase in presynaptic activity can be transmitted postsynaptically.

      In contrast, phasic firing occurs after a neuron is activated by presynaptic activity, and it adds activity on top of any background activity a neuron may have. It is typically restricted to one, a few, or a short burst of action potentials, after which the activity quickly returns to the resting state.
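Both regimes fall out of a leaky integrate-and-fire sketch: a constant suprathreshold bias gives tonic firing at a fixed rate, while a neuron silent at rest fires only a brief phasic burst when a transient input arrives. All parameter values below are illustrative assumptions:

```python
def lif_spikes(I_of_t, tau=0.02, R=1e7, V_rest=-0.07, V_th=-0.055,
               V_reset=-0.07, dt=1e-4, t_end=0.5):
    """Euler-integrated leaky integrate-and-fire; returns spike times (s)
    for a time-varying input current I_of_t (amps)."""
    v, spikes, t = V_rest, [], 0.0
    while t < t_end:
        v += dt / tau * (V_rest - v + R * I_of_t(t))
        if v >= V_th:          # threshold crossing: record spike and reset
            spikes.append(t)
            v = V_reset
        t += dt
    return spikes

# tonic: constant suprathreshold bias -> steady firing at a constant rate
tonic = lif_spikes(lambda t: 2.0e-9)
# phasic: silent at rest; a brief 30 ms input evokes a short burst
phasic = lif_spikes(lambda t: 4.0e-9 if 0.2 <= t < 0.23 else 0.0)
print(len(tonic), "tonic spikes;", len(phasic), "phasic spikes")
```

The tonic neuron fires throughout the simulation, while the phasic neuron's spikes are confined to the stimulus window and the trace returns to rest immediately afterwards.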


      We want to express our gratitude to Professor Julio-Cesar Martinez-Trujillo and his research group at Western University in London, Canada, for valuable discussions.

      Bartsch, M. V., Loewe, K., Merkel, C., Heinze, H. J., Schoenfeld, M. A., Tsotsos, J. K., et al. (2017). Attention to color sharpens neural population tuning via feedback processing in the human visual cortex hierarchy. J. Neurosci. 37, 10346. doi: 10.1523/JNEUROSCI.0666-17.2017

      Bressler, S. L., Tang, W., Sylvester, C. M., Shulman, G. L., and Corbetta, M. (2008). Top-down control of human visual cortex by frontal and parietal cortex in anticipatory visual spatial attention. J. Neurosci. 28, 10056. doi: 10.1523/JNEUROSCI.1776-08.2008

      Buschman, T. J., and Kastner, S. (2015). From behavior to neural dynamics: an integrated theory of attention. Neuron 88, 127. doi: 10.1016/j.neuron.2015.09.017

      Buschman, T. J., and Miller, E. K. (2007). Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science 315, 1860. doi: 10.1126/science.1138071

      Busse, L., Katzner, S., and Treue, S. (2008). Temporal dynamics of neuronal modulation during exogenous and endogenous shifts of visual attention in macaque area MT. Proc. Natl. Acad. Sci. U.S.A. 105, 16380. doi: 10.1073/pnas.0707369105

      Carrasco, M. (2011). Visual attention: the past 25 years. Vision Res. 51, 1484. doi: 10.1016/j.visres.2011.04.012

      Cavelier, P., Hamann, M., Rossi, D., Mobbs, P., and Attwell, D. (2005). Tonic excitation and inhibition of neurons: ambient transmitter sources and computational consequences. Prog. Biophys. Mol. Biol. 87, 3. doi: 10.1016/j.pbiomolbio.2004.06.001

      Corbetta, M., Miezin, F. M., Dobmeyer, S., Shulman, G. L., and Petersen, S. E. (1991). Selective and divided attention during visual discriminations of shape, color, and speed: functional anatomy by positron emission tomography. J. Neurosci. 11, 2383.

      Corbetta, M., and Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3, 201. doi: 10.1038/nrn755

      Cutzu, F., and Tsotsos, J. K. (2003). The selective tuning model of attention: psychophysical evidence for a suppressive annulus around an attended item. Vision Res. 43, 205. doi: 10.1016/S0042-6989(02)00491-1

      Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press.

      Deco, G., and Lee, T. S. (2002). A unified model of spatial and object attention based on inter-cortical biased competition. Neurocomputing 44, 775. doi: 10.1016/S0925-2312(02)00471-X

      Desimone, R., and Duncan, J. (1995). Neural mechanisms of selective visual attention. Annu. Rev. Neurosci. 18, 193. doi: 10.1146/annurev.ne.18.030195.001205

      Destexhe, A., Mainen, Z. F., and Sejnowski, T. J. (1998). Kinetic models of synaptic transmission. Methods Neuronal Modeling 2, 1.

      Dorval, A. D., and White, J. A. (2005). Channel noise is essential for perithreshold oscillations in entorhinal stellate neurons. J. Neurosci. 25, 10025. doi: 10.1523/JNEUROSCI.3557-05.2005

      Eagleman, D. M., and Sejnowski, T. J. (2000). Motion integration and postdiction in visual awareness. Science 287, 2036. doi: 10.1126/science.287.5460.2036

      Fallah, M., Stoner, G. R., and Reynolds, J. H. (2007). Stimulus-specific competitive selection in macaque extrastriate visual area V4. Proc. Natl. Acad. Sci. U.S.A. 104, 4165. doi: 10.1073/pnas.0611722104

      Ferrera, V. P., Rudolph, K. K., and Maunsell, J. H. (1994). Responses of neurons in the parietal and temporal visual pathways during a motion task. J. Neurosci. 14, 6171.

      Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500. doi: 10.1113/jphysiol.1952.sp004764

      Hopf, J. M., Boehler, C. N., Luck, S. J., Tsotsos, J. K., Heinze, H. J., and Schoenfeld, M. A. (2006). Direct neurophysiological evidence for spatial suppression surrounding the focus of attention in vision. Proc. Natl. Acad. Sci. U.S.A. 103, 1053. doi: 10.1073/pnas.0507746103

      Hutt, A. (2012). The population firing rate in the presence of GABAergic tonic inhibition in single neurons and application to general anaesthesia. Cogn. Neurodyn. 6, 227. doi: 10.1007/s11571-011-9182-9

      Itti, L. (2005). Models of bottom-up attention and saliency. Neurobiol. Attent. 582, 576. doi: 10.1016/B978-012375731-9/50098-7

      Itti, L., and Koch, C. (2001). Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194. doi: 10.1038/35058500

      Itti, L., Rees, G., and Tsotsos, J. K. (2005). Neurobiology of Attention. Waltham, MA: Elsevier/Academic Press.

      Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15, 1063. doi: 10.1109/TNN.2004.832719

      James, W. (1891). The Principles of Psychology, Vol. 2, London, UK: Macmillan.

      Jensen, O., Goel, P., Kopell, N., Pohja, M., Hari, R., and Ermentrout, B. (2005). On the human sensorimotor-cortex beta rhythm: sources and modeling. Neuroimage 26, 347. doi: 10.1016/j.neuroimage.2005.02.008

      Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., and Hudspeth, A. (2000). Principles of Neural Science. New York, NY: McGraw-Hill.

      Khayat, P. S., Niebergall, R., and Martinez-Trujillo, J. C. (2010). Attention differentially modulates similar neuronal responses evoked by varying contrast and direction stimuli in area MT. J. Neurosci. 30, 2188. doi: 10.1523/JNEUROSCI.5314-09.2010

      Koch, C., and Ullman, S. (1987). “Shifts in selective visual attention: towards the underlying neural circuitry,” in Matters of Intelligence, ed L. M. Vaina (Dordrecht: Springer), 115.

      Kosai, Y., El-Shamayleh, Y., Fyall, A. M., and Pasupathy, A. (2014). The role of visual area V4 in the discrimination of partially occluded shapes. J. Neurosci. 34, 8570. doi: 10.1523/JNEUROSCI.1375-14.2014

      Ladenbauer, J., Augustin, M., and Obermayer, K. (2014). How adaptation currents change threshold, gain, and variability of neuronal spiking. J. Neurophysiol. 111, 939. doi: 10.1152/jn.00586.2013

      Lee, J., and Maunsell, J. H. (2009). A normalization model of attentional modulation of single unit responses. PLoS ONE 4:e4651. doi: 10.1371/journal.pone.0004651

      Lennert, T., and Martinez-Trujillo, J. (2011). Strength of response suppression to distracter stimuli determines attentional-filtering performance in primate prefrontal neurons. Neuron 70, 141. doi: 10.1016/j.neuron.2011.02.041

      Loach, D. P., Tombu, M., and Tsotsos, J. K. (2005). Interactions between spatial and temporal attention: an attentional blink study. J. Vis. 5:109. doi: 10.1167/5.8.109

      Luck, S. J., Chelazzi, L., Hillyard, S. A., and Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol. 77, 24�. doi: 10.1152/jn.1997.77.1.24

      Martinez-Trujillo, J. C., and Treue, S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Curr. Biol. 14, 744�. doi: 10.1016/j.cub.2004.04.028

      McAdams, C. J., and Maunsell, J. H. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J. Neurosci. 19, 431�.

      Niebur, E., and Koch, C. (1994). A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons. J. Comput. Neurosci. 1, 141�. doi: 10.1007/BF00962722

      Oliva, A., Torralba, A., Castelhano, M. S., and Henderson, J. M. (2003). “Top-down control of visual attention in object detection,” in Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on: IEEE (Barcelona), I253–I256.

      Pape, H. C. (1996). Queer current and pacemaker: the hyperpolarization-activated cation current in neurons. Annu. Rev. Physiol. 58, 299�. doi: 10.1146/annurev.ph.58.030196.001503

      Pestilli, F., Viera, G., and Carrasco, M. (2007). How do attention and adaptation affect contrast sensitivity? J. Vis. 7:9. doi: 10.1167/7.7.9

      Posner, M. I. (2011). Cognitive Neuroscience of Attention. New York, NY: Guilford Press.

      Rao, S. C., Rainer, G., and Miller, E. K. (1997). Integration of what and where in the primate prefrontal cortex. Science 276, 821�. doi: 10.1126/science.276.5313.821

      Reynolds, J. H., Chelazzi, L., and Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. J. Neurosci. 19, 1736�.

      Reynolds, J. H., and Heeger, D. J. (2009). The normalization model of attention. Neuron 61, 168�. doi: 10.1016/j.neuron.2009.01.002

      Rothenstein, A. L., and Tsotsos, J. K. (2014). Attentional modulation and selection𠄺n integrated approach. PLoS ONE 9:e99681. doi: 10.1371/journal.pone.0099681

      Rutishauser, U., Walther, D., Koch, C., and Perona, P. (2004). “Is bottom-up attention useful for object recognition?,” in Computer Vision and Pattern Recognition, 2004. CVPR 2004, Proceedings of the 2004 IEEE Computer Society Conference on: IEEE (Washington, DC), II37–II44.

      Shriki, O., Sompolinsky, H., and Hansel, D. (2003). Rate models for conductance based cortical neuronal networks. Neural Comput. 15, 1809�. doi: 10.1162/08997660360675053

      Spratling, M. W., and Johnson, M. H. (2004). A feedback model of visual attention. J. Cogn. Neurosci. 16, 219�. doi: 10.1162/089892904322984526

      Tsotsos, J. K. (1990). Analyzing vision at the complexity level. Behav. Brain Sci. 13, 423�. doi: 10.1017/S0140525X00079577

      Tsotsos, J. K. (2011). A Computational Perspective on Visual Attention. Cambridge: MIT Press.

      Tsotsos, J. K., Culhane, S. M., Wai, W. Y. K., Lai, Y., Davis, N., and Nuflo, F. (1995). Modeling visual attention via selective tuning. Artif. Intell. 78, 507�. doi: 10.1016/0004-3702(95)00025-9

      Tsotsos, J., and Rothenstein, A. (2011). Computational models of visual attention. Scholarpedia 6:6201. doi: 10.4249/scholarpedia.6201

      van Aerde, K. I., Mann, E. O., Canto, C. B., Heistek, T. S., Linkenkaer-Hansen, K., Mulder, A. B., et al. (2009). Flexible spike timing of layer 5 neurons during dynamic beta oscillation shifts in rat prefrontal cortex. J. Physiol. 587(Pt 21), 5177�. doi: 10.1113/jphysiol.2009.178384

      Whittington, M. A., Traub, R. D., Kopell, N., Ermentrout, B., and Buhl, E. H. (2000). Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int. J. Psychophysiol. 38, 315�. doi: 10.1016/S0167-8760(00)00173-2

      Wiesenfeld, K., and Moss, F. (1995). Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs. Nature 373, 33�. doi: 10.1038/373033a0

      Williford, T., and Maunsell, J. H. (2006). Effects of spatial attention on contrast response functions in macaque area V4. J. Neurophysiol. 96, 40�. doi: 10.1152/jn.01207.2005

      Wilson, H. R. (1999). Spikes, Decisions, and Actions: the Dynamical Foundations of Neurosciences, Don Mills, ON: Oxford University Press, Inc.

      Keywords: visual attention, single cell, ST-neuron, firing rate, neural selectivity

      Citation: Avella Gonzalez OJ and Tsotsos JK (2018) Short and Long-Term Attentional Firing Rates Can Be Explained by ST-Neuron Dynamics. Front. Neurosci. 12:123. doi: 10.3389/fnins.2018.00123

      Received: 11 August 2017 Accepted: 15 February 2018
      Published: 02 March 2018.

      Xavier Otazu, Universitat Autònoma de Barcelona, Spain

      Keith Schneider, University of Delaware, United States
      Jihyun Yeonan-Kim, San Jose State University, United States

      Copyright © 2018 Avella Gonzalez and Tsotsos. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


      Results

      Exposure to chronic cold induced a long-lasting decrease in DA neuron population activity

      Rats were randomly assigned to either control or chronic cold groups, and the effect of stress on DA neuron activity was assessed. In control rats, the number of spontaneously active DA neurons encountered per electrode track was 1.1 ± 0.1 (n= 9 rats, 85 DA neurons), which is consistent with our previous studies (Valenti & Grace, 2010; Valenti et al., 2011). Two-week exposure to cold significantly reduced population activity by 46% to 0.5 ± 0.03 neurons/track (n= 6 rats, 27 DA neurons; CTRL vs CCE: one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A), which is consistent with our previous report (Moore et al., 2001). To investigate whether chronic cold induced persistent changes in VTA DA neuron activity, a subgroup of CCE rats was housed in the ambient-temperature colony for an extra 7 days following cold exposure. In these rats, the number of spontaneously active DA neurons remained significantly reduced compared to controls (15.1% below controls; CTRL vs 7d post CCE: 0.8 ± 0.1, n= 8 rats, 57 DA neurons; one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A); nevertheless, population activity was significantly higher than that observed in CCE rats tested the day after removal from cold (CCE vs 7d post CCE: one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A). We reported previously that different stressors can exert differential effects on DA neuron population activity depending on the relative location of the neurons within the VTA (Valenti et al., 2011). Thus, we further examined whether the effects of CCE were dependent on the location of the DA neurons across the medial-lateral extent of the VTA.
We found that the CCE-induced decrease in population activity occurred primarily in DA neurons located either in the medial (CTRL= 1.3 ± 0.2, n= 34 DA neurons; CCE= 0.7 ± 0.1, n= 12 DA neurons; one-way ANOVA, F(2,20)= 3.82, P= 0.039; Figure 1B) or in the central (CTRL= 1.1 ± 0.1, n= 29 DA neurons; CCE= 0.4 ± 0.1, n= 8 DA neurons; one-way ANOVA, F(2,20)= 4.51, P= 0.024; Figure 1B) part of the VTA for rats tested on the day after cold removal. However, no difference was observed in the lateral part of the VTA (one-way ANOVA, P= 0.939; Figure 1B). In contrast, DA neurons recorded from rats tested 7 days after CCE did not exhibit differences in activity in any of the 3 locations examined, when compared either to controls or to CCE rats tested on the following day (CTRL vs 7d post CCE, one-way ANOVA: medial, P= 0.20; central, P= 0.953; lateral, P= 0.939; CCE vs 7d post CCE, one-way ANOVA: medial, P= 1.00; central, P= 0.180; lateral, P= 0.939; Figure 1B).

      A) Exposure to cold stress for 2 weeks induced a pronounced reduction in the number of spontaneously active VTA DA neurons (population activity; black bar) compared to controls (white bar). This attenuation in population activity persisted when examined 7 days following removal from cold exposure (7d post CCE, hatched bar) (* one-way ANOVA, P< 0.05; see text for details). B) Chronic cold selectively decreased the number of spontaneously active DA neurons located in the medial (M) and central (C), but not lateral (L), VTA of rats tested on the day after chronic cold exposure (black squares) compared to controls (white circles) (* CTRL vs CCE: one-way ANOVA, medial, P= 0.039; central, P= 0.024). In rats tested 7 days post CCE there was no significant change in the DA neuron population located in any subdivision of the VTA (black diamonds). No significant changes were observed in either average firing rate (C) or average percent burst firing (D).

      Chronic exposure to cold did not significantly affect DA neuron average firing rate (CTRL: 4.0 ± 0.3Hz, n= 9 rats; CCE: 3.3 ± 0.6Hz, n= 6 rats; one-way ANOVA, P= 0.2828; Figure 1C) or burst firing (CTRL: 23.7 ± 3.2%, n= 9 rats; CCE: 23.3 ± 2.9%, n= 6 rats; one-way ANOVA, P= 0.2935; Figure 1D) at either post-exposure time point. Moreover, there was no difference in the distribution of percent spikes in bursts for neurons recorded in either exposure group when compared to control (day after removal: two-sample Kolmogorov-Smirnov test, P= 0.5044; 7 days: two-sample Kolmogorov-Smirnov test, P= 0.3875).

      Thus, these data suggest that a maintained, inescapable, mild stressor, i.e. a 2-week chronic exposure to cold, produced prominent changes in VTA DA neuron population activity, which were still present in an attenuated fashion a week after cold removal. Moreover, similar to the effects observed with repetitive footshock (Valenti et al., 2011), chronic cold stress affected VTA DA neuron spontaneous activity differentially depending on location within the VTA.

      Chronic cold exposure did not alter mPFC pyramidal neuron activity

      The PFC is a region known to regulate stress responses subcortically (Abercrombie et al., 1989; Finlay et al., 1995; Cabib & Puglisi-Allegra, 1996), and recent studies suggested that the mPFC attenuates stress responses following a controllable stressor (Amat et al., 2008). Thus, the effects of CCE on mPFC pyramidal neurons were examined. In control rats, putative pyramidal neurons exhibited an average firing rate of 1.1 ± 0.2Hz (n= 7 rats, 34 neurons). Two weeks of exposure to cold did not significantly alter pyramidal neuron firing rate (CCE: 0.9 ± 0.1Hz, n= 7 rats, 26 neurons; Kruskal-Wallis one-way ANOVA on Ranks, P= 0.704). In addition, chronic cold stress did not change mPFC pyramidal neuron percent burst firing (CTRL: 41.5 ± 3.0%; CCE: 40.0 ± 4.4%; one-way ANOVA, P= 0.769) or the average interspike interval (CTRL: 16.6 ± 1 msec; CCE: 16.6 ± 1 msec; Kruskal-Wallis one-way ANOVA on Ranks, P= 0.820). Thus, cold stress did not affect any of the parameters of mPFC pyramidal neuron activity measured.

      Previous exposure to chronic cold prevented the acute restraint-induced increase in VTA DA neuron population activity

      We have shown that restraint stress, given either acutely or repeatedly, increased DA neuron population activity (Valenti et al., 2011). Thus, the effect of restraint on DA neuron activity in untreated rats was opposite in direction to that found following chronic cold ( Figure 1 ). Therefore, we examined whether previous exposure to chronic cold affected the restraint-induced increase in VTA DA neuron population activity or restraint-induced amphetamine cross-sensitization of locomotor activity. A two-way ANOVA was applied to examine the effects of CCE, AR, and the interaction between CCE and AR, followed by the Holm-Sidak method for multiple comparisons (source of variation: CCE, F(1,26)= 99.3, P< 0.001; AR, F(1,26)= 62.9, P< 0.001; CCE x AR, F(1,1,26)= 7.01, P= 0.0136). Thus, an acute restraint stress session induced a pronounced activation of VTA DA neuron population activity (CTRL vs AR: 1.9 ± 0.1, n= 8 rats, 135 DA neurons; two-way ANOVA, F(1,26)= 99.3, P< 0.001; Figure 2A), as previously reported (Valenti et al., 2011). Acute restraint stress also increased DA neuron population activity in rats pre-exposed to chronic cold compared to CCE alone, restoring the CCE-induced decrease in population activity toward control levels (CCE vs CCE + AR: 0.9 ± 0.1, n= 7 rats, 54 DA neurons; two-way ANOVA, F(1,26)= 99.3, P< 0.001; Figure 2A). However, the restraint-induced increase in DA neuron population activity observed in CCE rats was far less pronounced than in AR alone (AR vs CCE + AR: two-way ANOVA, F(1,26)= 62.9, P< 0.001; Figure 2A). In addition, there was a significant interaction between the two factors (CCE x AR: two-way ANOVA, F(1,1,26)= 7.01, P= 0.0136). Thus, pre-exposure to chronic cold appeared to protect the DA system from the effects of acute restraint stress.

      In examining the location of the population change within the VTA, marked differences were observed following the stress protocols. Acute restraint increased DA neuron firing across the medial, central, and lateral portions of the VTA (CTRL vs AR, two-way ANOVA: medial, F(1,21)= 4.5, P= 0.045; central, F(1,21)= 50.4, P< 0.001; lateral, F(1,21)= 8.1, P= 0.012; Figure 2B), whereas chronic cold decreased population activity primarily in the medial and central VTA ( Figures 1B and 2B ). In CCE animals subsequently exposed to acute restraint, the increase in medial VTA DA neuron firing remained (AR: 1.9 ± 0.2, n= 8 rats, 45 DA neurons; CCE + AR: 1.6 ± 0.1, n= 7 rats, 33 DA neurons; two-way ANOVA, P= 0.258), which corresponded to a 135.1% increase over CCE alone (CCE vs CCE + AR: two-way ANOVA, F(1,21)= 13.4, P= 0.0014; Figure 2B). In contrast, previous exposure to chronic cold prevented the prominent restraint-induced increase in the central (AR: 1.8 ± 0.1, n= 8 rats, 44 DA neurons; CCE + AR: 0.5 ± 0.1, n= 7 rats, 11 DA neurons; two-way ANOVA, F(1,21)= 50.4, P< 0.001) and lateral VTA (AR: 1.9 ± 0.3, n= 7 rats, 40 DA neurons; CCE + AR: 0.7 ± 0.1, n= 5 rats, 10 DA neurons; two-way ANOVA, F(1,15)= 8.15, P= 0.012), with population activity not significantly different from that observed following CCE alone (CCE vs CCE + AR: two-way ANOVA, central, P= 0.658; lateral, P= 0.659; Figure 2B). In addition, a significant interaction between the effects of cold and restraint stress was observed only for DA neurons located in the central VTA (two-way ANOVA, F(1,1,21)= 8.24, P= 0.0092). Therefore, CCE attenuated the ability of restraint stress to increase DA neuron activity in regions of the VTA that project to more associative regions of the striatum, without affecting the increase in the reward-related medial VTA (Ikemoto, 2007; Lodge & Grace, 2011; Valenti et al., 2011).

      No significant change in either average firing rate (for CTRL and CCE see above; AR: 4.3 ± 0.2Hz, n= 8 rats; CCE + AR: 3.8 ± 0.1Hz, n= 7 rats; two-way ANOVA, source of variation: CCE, P= 0.0841; AR, P= 0.2742) or average percent burst firing (for CTRL and CCE see above; AR: 35.3 ± 4.1%, n= 7 rats; CCE + AR: 22.8 ± 4.4%, n= 7 rats; two-way ANOVA, source of variation: CCE, P= 0.0995; AR, P= 0.1526; Figure 2C) was observed in any of the groups tested, or when the interaction of the two stress protocols was examined (FR: two-way ANOVA, P= 0.8404; %B: two-way ANOVA, P= 0.1216). Given that acute restraint was shown to increase average percent burst firing in control rats (Valenti et al., 2011), we examined whether pre-exposure to chronic cold also prevented this restraint-induced increase in burst firing ( Figure 2C ). Further analysis revealed that pre-exposure to cold stress altered the distribution of percent burst firing, with many more neurons showing low levels of burst discharge following chronic cold + acute restraint (AR vs CCE + AR: two-sample Kolmogorov-Smirnov test, P= 0.0424; Figure 2D ).

      Effects of chronic exposure to cold on amphetamine-induced locomotor activity

      Previous studies from our laboratory suggest that an increased level of VTA DA neuron population activity correlates with an increased locomotor response to amphetamine (Lodge & Grace, 2008; Valenti et al., 2011). Given that CCE induced a pronounced reduction in DA neuron population activity, and that CCE attenuated the electrophysiological response to restraint stress, this relationship was examined behaviorally. Both spontaneous and amphetamine-induced locomotor activity were recorded in 4 groups of rats: control, chronic cold, restraint, and chronic cold + restraint. Baseline locomotor activity was recorded for 30 min and measured in separate open-field arenas (Coulbourn Instruments). Rats were then removed from the arenas and injected i.p. with 0.5 mg/kg amphetamine. A three-way ANOVA was applied to analyze the effects of CCE, AR, time, and their interactions, and all groups were compared (source of variation: CCE, F(1,864)= 4.275, P= 0.039; AR, F(1,864)= 1.485, P= 0.223; time, F(23,864)= 12.962, P< 0.001; interactions: CCE x AR, F(1,1,864)= 0.905, P= 0.342; CCE x time, F(1,23,864)= 0.47, P= 0.985; AR x time, F(1,23,864)= 3.137, P< 0.001; CCE x AR x time, F(1,1,23,864)= 0.426, P= 0.992). In addition, a pairwise multiple comparison procedure (Holm-Sidak method) was applied following the ANOVA to compare these factors. Administration of amphetamine to CCE rats induced a rapid and transient increase in locomotor activation compared to matched controls for the first 5 min after drug administration (CTRL, n= 8 rats vs CCE, n= 9 rats; three-way ANOVA, P= 0.040; Figure 3 ); however, the locomotor activity of CCE rats was slightly, but not significantly, lower than that of controls at subsequent time points.
Consistent with our previous study (Valenti et al., 2011), 0.5 mg/kg amphetamine induced a pronounced increase in the locomotor activity of AR rats compared to controls during the first 15 min post-drug (CTRL, n= 8 rats vs AR, n= 12 rats; three-way ANOVA: 35min, P< 0.001; 40min, P= 0.042; 45min, P= 0.003; 50min, P= 0.026). In contrast, in rats that received acute restraint on the day after cold removal, chronic cold stress failed to significantly affect the acute restraint-induced increase in the locomotor response to amphetamine (AR, n= 12 rats vs CCE + AR, n= 11 rats; three-way ANOVA, P= 0.374; Figure 3B ).

      Thus, the decrease in DA neuron population activity observed in CCE rats was not found to correlate with a decrease in the amplitude of amphetamine-induced locomotion. Moreover, 2 weeks of continuous exposure to cold attenuated the restraint stress-induced increase in DA neuron population activity but did not affect restraint-induced behavioral activation to amphetamine.


      Neuronal Firing Rate As Code Length: a Hypothesis

      Many theories assume that a sensory neuron’s higher firing rate indicates a greater probability of its preferred stimulus. However, this contradicts (1) the adaptation phenomena, in which prolonged exposure to, and thus increased probability of, a stimulus reduces the firing rates of cells tuned to the stimulus, and (2) the observation that unexpected (low-probability) stimuli capture attention and increase neuronal firing. Other theories posit that the brain builds predictive/efficient codes for reconstructing sensory inputs. However, they cannot explain why the brain preserves some information while discarding the rest. We propose that in sensory areas, projection neurons’ firing rates are proportional to optimal code length (i.e., negative log estimated probability), and their spike patterns are the code, for useful features in inputs. This hypothesis explains adaptation-induced changes of V1 orientation tuning curves and bottom-up attention. We discuss how the modern minimum-description-length (MDL) principle may help understand neural codes. Because regularity extraction is relative to a model class (defined by cells) via its optimal universal code (OUC), MDL matches the brain’s purposeful, hierarchical processing without input reconstruction. Such processing enables input compression/understanding even when model classes do not contain true models. Top-down attention modifies lower-level OUCs via feedback connections to enhance transmission of behaviorally relevant information. Although OUCs concern lossless data compression, we suggest possible extensions to lossy, prefix-free neural codes for prompt, online processing of the most important aspects of stimuli while minimizing behaviorally relevant distortion. Finally, we discuss how neural networks might learn MDL’s normalized maximum likelihood (NML) distributions from input data.
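The central quantitative claim, that firing rate is proportional to optimal code length, i.e. to the negative log of the estimated stimulus probability, can be made concrete with a toy calculation (a sketch only; the gain k and the probabilities here are invented for illustration):

```python
import math

def code_length_bits(p):
    """Optimal (Shannon) code length, in bits, for an event of probability p."""
    return -math.log2(p)

# Under the hypothesis, rate = k * code length, for some gain k (Hz per bit).
k = 10.0
common_stimulus = code_length_bits(0.5)    # 1 bit  -> short code, low rate
rare_stimulus = code_length_bits(1 / 64)   # 6 bits -> long code, high rate
rate_common = k * common_stimulus
rate_rare = k * rare_stimulus
```

On this reading, adaptation raises the estimated probability of the adapted stimulus, which shortens its code and lowers the firing rate, while a surprising (low-probability) stimulus gets a long code and a high rate, consistent with the two observations cited above.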



      Michelle's Psychology Blog

      Classical Conditioning: passive learning; because it is automatic, the learner does not have to think about its actions.

      Unconditioned stimulus: is something that causes an automatic natural reaction.

      Unconditioned Response: A natural, usually unvarying response evoked by a stimulus in the absence of learning or conditioning.

      Neutral Stimulus: is one that elicits no response.

      Acquisition: refers to the first stages of learning when a response is established.

      Extinction: refers to the gradual weakening of a conditioned response that results in the behavior decreasing or disappearing.


      Garcia and Koelling Study: Fed rats sweet liquid followed by an injection which made them sick, then the rats avoided sweet liquid. (Taste Aversion)


      Operant Conditioning: is a type of learning in which an individual's behavior is modified by its antecedents and consequences.

      The Law of Effect: Edward Thorndike placed cats in puzzle boxes; the cats learned to pull a lever to escape the box and reach a reward of fish. Any behavior that is followed by pleasant consequences is likely to be repeated, and any behavior followed by unpleasant consequences is likely to be stopped.

      BF Skinner: Believed the best way to understand behavior is to look at the causes of an action and its consequences. (Operant Conditioning)


      Reinforcer: anything that increases a behavior. There is positive reinforcement, the addition of something good, and negative reinforcement, the removal of something unpleasant.

      Shaping: the process of reinforcing a specific behavior that a person is trying to instill in their subject.

      Chaining Behaviors: Subjects are taught a number of responses in succession to get a reward.

      Primary Reinforcers: things which are in themselves rewarding.
      Secondary Reinforcers: things we have learned to value (money is a generalized reinforcer)


      Token Economy: is a system of behavior modification based on the systematic reinforcement of target behavior.

      Premack Principle: a more-preferred activity can be used as a reinforcer for a less-preferred one; choose the reinforcer the subject actually wants.

      Reinforcement Schedule: How often you use the reinforcer.
      Fixed Ratio: reinforcement after a set number of responses.
      Variable Ratio: Reinforcement after a RANDOM number of responses.

      Observational Learning: Albert Bandura; we learn by modeling the behavior of others.

      Latent learning: Edward Tolman's three-group rat maze experiment; latent means hidden, i.e. learning that occurs without reinforcement and shows up only once a reward is introduced.




      • A neuron could signal one value by rate, another by the variance in the rate, and another by skew in the rate.
      • Relative timing in two neurons might signal intensity or phase or causality.
      • Two neurons could signal four different conditions if their spikes were treated as binary words. Synchrony would be important.
      • The order of occurrence of specific interspike intervals could modulate a synapse in a specific fashion.

      Since spike trains are (typically) sending messages to synapses, it might be useful to ask how we can interpret a spike train in a way which makes synaptic sense. A first approximation might ask: over what time scale does the synapse integrate information? If the integration time is very short, then only events which are close together matter, so synchrony is a useful concept for a code. On the other hand, if the integration time is long, then the total number of spikes matters, so a rate code is a useful concept. Perhaps we should analyse spike trains on every time scale. In addition to time scale, we could analyse trains in several ways, each corresponding to a different (measured, or hypothesized) aspect of synaptic function. For instance, activation and learning might have different effective codings from the same spike train.

      We want to talk about schemes for finding patterns in spike trains. Bruce will describe patterns in EOD discharge data, and I will describe some theoretical techniques which seem to hold promise in their flexibility and potential synaptocentric (love those neologisms) orientation. The thread between them is the use of methods that avoid some problems with traditional binning methods. Specifically, we will consider schemes that treat the spike time as the most important variable.

      • relating neural activity to stimuli
      • trying to find repetitive patterns in a motor discharge
      • relationship between descending neuronal activity and motor output (Bruce Carlson)
      • looking for codes distributed across a neuron population
      • functional interactions among neurons
      • Post stimulus time histogram -- bin the spikes into time slots referenced to the stimulus.
      • Spike density function (3) and other digital filters (4)
      • Auto- and cross-correlation functions of the entire train using spike density function to smooth
      • ISI distributions and return maps (3)
      • Spike-triggered average (5)
      • Spike-centered distance analysis (1), (2)
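The first method in the list, the post-stimulus time histogram, is simple enough to sketch directly. The following is an illustrative Python version (the programs mentioned elsewhere on this page are MATLAB; the spike and stimulus times here are invented):

```python
import numpy as np

def psth(spike_times, stim_times, window=0.2, bin_width=0.05):
    """Post-stimulus time histogram: bin spikes into time slots referenced
    to each stimulus onset, then convert summed counts to a firing rate
    (spikes/s) averaged over stimulus presentations."""
    n_bins = int(round(window / bin_width))
    edges = np.linspace(0.0, window, n_bins + 1)
    counts = np.zeros(n_bins)
    for t0 in stim_times:
        # spike latencies relative to this stimulus presentation
        rel = spike_times[(spike_times >= t0) & (spike_times < t0 + window)] - t0
        counts += np.histogram(rel, bins=edges)[0]
    return counts / (len(stim_times) * bin_width)

# toy data: two presentations, a small burst 60-80 ms after each onset
stim = np.array([1.0, 3.0])
spikes = np.array([1.06, 1.07, 1.08, 3.06, 3.07])
rates = psth(spikes, stim)  # one average rate per 50 ms bin
```

With these invented times, all spikes land in the second bin (50-100 ms after onset), so the histogram shows a single peak there and zero elsewhere.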

      The techniques we will explore here are based on references (1), (2), and (3). These avoid problems with binning in time and allow the flexibility of imposing synaptic interpretations on the train. Bruce has talked about how convolving a train with a Gaussian distribution allows a smoother, more reliable estimate of burst parameters. What we want to do now is describe how spike trains might be compared to each other for similarity.

      To set the stage, let us consider a pure rate code. Each train is then specified by its rate, and the difference between trains is the difference of their rates. We could say that the distance separating the trains is the difference of their rates, or that rate difference is a distance measure between spike trains. What we would like is a distance measure which works on any time scale. Then we could speak of the distance between two trains at a given time precision. Such a distance measure would allow us to move smoothly from strict synchronization codes (very small spike time differences allowed) to rate codes (large time differences allowed within an overall interval). The time resolution at which you analyse the train might also correspond to the time constant of the synapse.

      Two recent papers describe techniques for computing the distance between two spike trains at any time resolution. Victor's paper (1) defines the distance between two spike trains in terms of the minimum cost of transforming one train into the other. Only three transforming operations are allowed: move a spike, add a spike, or delete a spike. The cost of addition or deletion is set at one. The cost of moving a spike by a small time is a parameter which sets the time scale of the analysis. If the cost of motion is very small, then you can slide a spike anywhere, but it still costs to insert or delete a spike, so the distance between two trains is just the difference in their numbers of spikes. This is similar to a rate code distance. If the cost of motion is very high, then any spike is a different spike, so the minimum cost becomes the cost of inserting or deleting all the unmatched spikes. The distance between two trains becomes approximately the number of spikes in the trains which are not exactly aligned, a sort of measure of synchrony. Intermediate values of the cost interpolate smoothly between perfect synchrony and a rate code.
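This minimum transformation cost can be computed with a standard edit-distance style dynamic program. The sketch below is ours (Python, rather than the MATLAB programs mentioned later); it assumes unit insertion/deletion cost and a cost of q per second of spike motion, matching the description above:

```python
def victor_purpura(t1, t2, q):
    """Victor-style spike-train distance: minimum total cost of transforming
    train t1 into train t2, where inserting or deleting a spike costs 1 and
    moving a spike by dt seconds costs q * |dt|."""
    n, m = len(t1), len(t2)
    # D[i][j] = distance between the first i spikes of t1 and first j of t2
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)          # delete all i spikes
    for j in range(1, m + 1):
        D[0][j] = float(j)          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + 1,                                   # delete t1[i-1]
                D[i][j - 1] + 1,                                   # insert t2[j-1]
                D[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]),  # move a spike
            )
    return D[n][m]
```

With q = 0 the result is just the difference in spike counts (the rate-code limit described above); with very large q it approaches twice the number of non-coincident spikes, since deleting and re-inserting (cost 2) beats moving.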

      Rossum's paper (2) computes a distance which is closely related to Victor's distance, but much easier to implement and easier to explain. Each spike train is convolved with an exponential function:

      f(t) = Σi H(t − ti) · exp( −(t − ti) / tc )

      Where ti is the time of occurrence of the ith spike and H is the Heaviside step function (zero for negative arguments, one otherwise). You get to choose the time constant tc of the exponential, which sets the time scale of the distance measurement. Call the convolved waveforms f(t) and g(t). You then form the distance as:

      D(f, g)² = (1/tc) · Σt [ f(t) − g(t) ]² · dt

      Where dt is the spike sampling time step. This distance could be considered an approximate difference between two post-synaptic current sequences triggered by the respective spike trains, because such currents tend to have approximately exponential shape. In a sense, the Rossum distance measures the difference in the effect of the two trains on their respective synapses (to a very crude approximation).
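Taken literally, this recipe is only a few lines of code. A hypothetical Python sketch (the function name, sampling step, and train duration are our choices, not from the papers):

```python
import numpy as np

def rossum_distance(spikes1, spikes2, tc, dt=0.001, duration=1.0):
    """Rossum-style distance: each spike contributes a causal exponential
    exp(-(t - ti)/tc) for t >= ti; the distance is the integrated squared
    difference of the two filtered waveforms, normalized by tc."""
    t = np.arange(0.0, duration, dt)

    def filtered(spike_times):
        f = np.zeros_like(t)
        for ti in spike_times:
            mask = t >= ti
            f[mask] += np.exp(-(t[mask] - ti) / tc)
        return f

    f, g = filtered(spikes1), filtered(spikes2)
    # discrete approximation of (1/tc) * integral of (f - g)^2 dt
    return np.sqrt(np.sum((f - g) ** 2) * dt / tc)
```

Identical trains are at distance zero; a single unmatched spike contributes a fixed amount regardless of tc, and two spikes farther apart than a few time constants count as fully distinct, which is the behavior described in the comparison below.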

      The Rossum time scale, tc, and Victor's cost parameter, q, are reciprocally related: tc ~ 1/q.

      We can compare the two distance measures over a wide range of time scales using this reciprocal relation. Two of the matlab programs given below do this comparison, but an example here might help. The following image shows two spike trains. The blue train is regular and the red train is a gaussian-dithered version of the blue train. At small time scales, the Victor distance is 8 because 4 spikes are not exactly aligned: 4 spike deletions and 4 insertions are necessary to transform one train into the other. At time scales comparable to the dither (in this case std. dev. = 10 mSec), the Victor distance starts to drop because it becomes cheaper to move a spike. At long time scales the Victor distance goes to zero because both trains have the same number of spikes. The Rossum distance falls more smoothly because it depends on a smooth exponential weighting function. The distances at short time scales are similar because the exponentials have essentially fallen to zero. At large time scales the Rossum distance also goes to zero, because it too measures the total number of spikes. Which distance you decide to use depends on how you think the spike train is interpreted post-synaptically. Note that the distance is computed at all possible time scales, so that different criteria of synchrony are automatically available.
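The limiting behaviors just described can be checked numerically. The following self-contained sketch builds a regular train and a jittered copy and evaluates the Victor metric at three cost settings; the 50 ms spacing, the deterministic ~10 ms jitter (standing in for the Gaussian dither of the figure), and the spike counts are arbitrary choices for this illustration.

```python
import numpy as np

# regular "blue" train and a jittered "red" copy (19 spikes each)
blue = np.arange(0.05, 1.0, 0.05)
jitter = 0.01 * np.cos(np.arange(blue.size))   # deterministic ~10 ms offsets
red = blue + jitter                            # 50 ms spacing keeps this sorted

def vp(t1, t2, q):
    # Victor metric by dynamic programming; insert/delete cost 1, move cost q*|dt|
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0], G[0, :] = np.arange(n + 1), np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1, G[i, j - 1] + 1,
                          G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

d_rate = vp(blue, red, 0.0)         # rate-code limit: equal counts, distance 0
d_sync = vp(blue, red, 1e6)         # synchrony limit: delete+insert every spike
d_mid = vp(blue, red, 1.0 / 0.01)   # cost matched to the jitter time scale
```

As claimed in the text, the distance collapses to zero at long time scales (equal spike counts), saturates at twice the number of misaligned spikes at short time scales, and takes intermediate values when the cost parameter matches the jitter.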

      Once we can compute distances between spike trains, we can try out several techniques outlined in reference (1). The following assumes that trains are caused by some controlled stimulus, and can therefore be categorized a priori by the stimulus type. These steps have been implemented in the software described below.

      • Information estimate. How well do the a priori types and actual trains match up?
        • Compute the pair-wise distances between all recorded trains (at some specified time scale).
        • For each train:
          • Compute the average distance to each a priori stimulus group (including the group of the current train).
          • Select the group with the minimum average distance as the effective group to which the train belongs.
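The steps above can be sketched in a few lines. The minimum-average-distance assignment and the confusion-matrix information estimate follow the procedure outlined here; the toy "distance" used in the demonstration (absolute firing-rate difference) and all parameter values are stand-ins, not taken from the original software.

```python
import numpy as np

def classify_by_group_distance(D, labels):
    """Assign each train to the group with the minimum average distance.
    D: (N, N) matrix of pairwise spike train distances.
    labels: a-priori group of each train (each group needs >= 2 members).
    The train itself is excluded from its own group's average."""
    labels = np.asarray(labels)
    groups = np.unique(labels)
    N = D.shape[0]
    assigned = np.empty(N, dtype=labels.dtype)
    for i in range(N):
        means = [D[i, (labels == g) & (np.arange(N) != i)].mean() for g in groups]
        assigned[i] = groups[int(np.argmin(means))]
    return assigned

def confusion_matrix(labels, assigned):
    groups = list(np.unique(labels))
    C = np.zeros((len(groups), len(groups)), dtype=int)
    for a, b in zip(labels, assigned):
        C[groups.index(a), groups.index(b)] += 1
    return C

def transmitted_info_bits(C):
    """Mutual information (bits) between a-priori and assigned groups."""
    P = C / C.sum()
    pr, pc = P.sum(1, keepdims=True), P.sum(0, keepdims=True)
    nz = P > 0
    return float((P[nz] * np.log2(P[nz] / (pr @ pc)[nz])).sum())

# toy demonstration: two groups of trains, "distance" = firing rate difference
rates = np.array([10.0, 11.0, 12.0, 30.0, 31.0, 32.0])
labels = np.array([0, 0, 0, 1, 1, 1])
D = np.abs(rates[:, None] - rates[None, :])
assigned = classify_by_group_distance(D, labels)
C = confusion_matrix(labels, assigned)
```

A perfectly diagonal confusion matrix between two equally likely groups carries exactly 1 bit, which is the same quantity reported later for the sine-wave-modulated pulse example.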

          Uses of a spike train distance measure:

          • Song length and time scale analysis for cricket song (7). The authors conclude that the time scale of analysis is 10 mSec and that about 2 syllables of the song are necessary for full separation of songs.
          • Function of the visual cortex (8).
          • Burst similarity in a long train. See below, where we take a burst located by Bruce Carlson's convolution method and slide it along the spike train looking for minimum-distance fits.
          • Regular and dithered spike trains. code. An example image is given above.
          • Currents applied to integrate-and-fire neurons (with adaptation). code. The first figure below shows the simulated voltage, current and spike trains from two groups of 5 IF neurons in response to a square current pulse and the same pulse dithered with gaussian noise. The second figure shows the two trains which fire during the stimulus and the resulting Rossum distances, taken pairwise between every possible pair of spike trains (45 total). Red indicates one group, blue the other, and black the cross-distance. Summing the distances over all time scales and plotting them by group shows that the groups clump by distance. For comparison, the simple time-bin histogram distance is shown at the bottom.
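For readers who want to reproduce the flavor of this simulation without the original MATLAB code, here is a minimal leaky integrate-and-fire neuron with a spike-triggered adaptation current responding to a square current pulse. All parameter values (time constants, threshold, adaptation increment) are illustrative guesses, not the ones used in the figures.

```python
import numpy as np

def lif_adapt(I, dt=1e-4, tau_m=0.02, R=1.0, v_th=1.0, v_reset=0.0,
              tau_a=0.2, da=0.05):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current: dv/dt = (-v + R*(I - a))/tau_m, where a decays with time
    constant tau_a and jumps by da at every spike."""
    v, a, spikes = 0.0, 0.0, []
    for k, i_in in enumerate(I):
        v += dt * (-v + R * (i_in - a)) / tau_m   # forward Euler step
        a -= dt * a / tau_a                       # adaptation decays
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
            a += da                               # adaptation builds up per spike
    return spikes

# square current pulse, on between 0.1 s and 0.9 s
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
I = np.where((t >= 0.1) & (t < 0.9), 1.5, 0.0)
spikes = lif_adapt(I, dt=dt)
```

Because the adaptation current grows with each spike, the inter-spike intervals stretch over the course of the pulse, which is what makes the early and late parts of the response separable by a time-scale-sensitive distance.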

          • Computing the bit rate from distance clusters. code. The first figure below shows the simulated voltage, current and spike trains from four groups of 5 IF neurons in response to a square current pulse modulated with 4 different amplitudes of sine wave. The image below shows the current, 20 spike trains (color coded) and the resulting information as a function of scale. The peak information is about 1 bit, implying fairly poor discrimination of the 4 groups of spike trains. At large scales the bit rate drops, implying that the average spike rate carries little information.
          • Multidimensional scaling and temporal profiling. This code is modified from (6) to include 3D plotting and the temporal profiling procedure. To run, it also needs the programs in this ZIP file, which is from (6). The call to fminu must be modified to call fminunc. The first image shows the optimal embedding of the train distances in 3D (data from the previous image). The numbers correspond to individual trains and the colors to the 4 groups. The second image is the temporal profile for this 3-space using 5 time-bins for each spike train. It shows that dimension 1 is coding a weighted sum, so that high values along the axis mean a low initial firing rate, then a high rate, followed again by a low rate. The second dimension is encoding a temporal pattern of approximately the first derivative of dimension 1.
          • Find a burst by minimizing the Rossum distance. code. The figure below shows an EOD train (first panel) with one burst isolated (by hand, in the second panel). The summed distance is a simple sum of log-distributed time scale distances (third panel). The bottom panel shows a burst (the one starting at about 13.8 sec) aligned by using the minimum distance (about 170). A separate fit to the burst at about 9.35 seconds (classified by Bruce as a different type) shows a minimum distance of about 200. The burst at about 18.7 gives a best-fit distance of about 150. A random chunk of train at about 6 seconds gives a minimum distance of 280. Another run, scanning from 2 seconds to 19 seconds, found all the bursts with distances around 200. Of course, the reference burst gave a distance of zero.

            - Firing pattern analysis in Windoze including ISI, return map, firing rate estimation, and correlation (implements the techniques in (3)).
            - Time series analysis
            - Spike Interchange File Format
        • Signal Processing Techniques for Spike Train Analysis using MatLab - These M-files implement the analysis procedures discussed in chapter 9 of "Methods in Neuronal Modeling".
        • NTSA Workbench - Neuronal Time Series Analysis (NTSA) Workbench is a set of tools, techniques and standards designed to meet the needs of neuroscientists who work with neuronal time series data.
        • NeuroExplorer -- Flexible train analysis


          Exact firing time statistics of neurons driven by discrete inhibitory noise

          Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. In order to investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude, but apply as well to finite amplitude post-synaptic potentials, thus being able to capture the effect of rare and large spikes. The developed methods are also able to reproduce the average firing properties of heterogeneous neuronal populations.


          Michelle's Psychology Blog

          Classical Conditioning: is passive learning; because it's automatic, the learner doesn't have to think about its actions.

          Unconditioned stimulus: is something that causes an automatic natural reaction.

          Unconditioned Response: A natural, usually unvarying response evoked by a stimulus in the absence of learning or conditioning.

          Neutral Stimulus: is one that elicits no response.

          Acquisition: refers to the first stages of learning when a response is established.

          Extinction: refers to the gradual weakening of a conditioned response that results in the behavior decreasing or disappearing.


          Garcia and Koelling Study: Fed rats sweet liquid followed by an injection which made them sick, then the rats avoided sweet liquid. (Taste Aversion)


          Operant Conditioning: is a type of learning in which an individual's behavior is modified by its antecedents and consequences.

          The Law of Effect: Edward Thorndike placed cats in puzzle boxes; the cats learned to pull the lever to get out of the box to a reward of fish. Any behavior that is followed by pleasant consequences is likely to be repeated, and any behavior followed by unpleasant consequences is likely to be stopped.

          BF Skinner: Believed the best way to understand behavior is to look at the causes of an action and its consequences. (Operant Conditioning)


          Reinforcer: anything that increases a behavior. There's positive reinforcement, which is the addition of something good, and negative reinforcement, the removal of something unpleasant.

          Shaping: the process of reinforcing a specific behavior that a person is trying to instill in their subject.

          Chaining Behaviors: Subjects are taught a number of responses in succession to get a reward.

          Primary Reinforcers: things which are in themselves rewarding.
          Secondary Reinforcers: things we have learned to value (money is a generalized reinforcer)


          Token Economy: is a system of behavior modification based on the systematic reinforcement of target behavior.

          Premack Principle: a more preferred activity can be used as a reinforcer for a less preferred one.

          Reinforcement Schedule: How often you use the reinforcer.
          Fixed Ratio: reinforcement after a set number of responses.
          Variable Ratio: Reinforcement after a RANDOM number of responses.

          Observational Learning: Albert Bandura, we learn through modeling behavior from others.

          Latent learning: 3-rat experiment, Edward Tolman; latent means hidden.



          Mechanisms

          Warning

          The THREADSAFE mechanism case is a bit more complicated if the mechanism anywhere assigns a value to a GLOBAL variable. When the user explicitly specifies that a mechanism is THREADSAFE, those GLOBAL variables that anywhere appear on the left hand side of an assignment statement (and there is no such assignment with the PROTECT prefix) are actually thread-specific variables. Hoc access to thread-specific global variables is with respect to a static instance which is shared by the first thread in which the mechanism actually exists.

          capacitance

          Syntax:

          section.cm (uF/cm2)

          section.i_cap (mA/cm2)

          Description: capacitance is a mechanism that is automatically inserted into every section. cm is a range variable with a default value of 1.0. i_cap is a range variable which contains the varying membrane capacitive current during a simulation. Note that i_cap is most accurate when a variable step integration method is used.

          hh

          Syntax:

          section.insert('hh')

          Description:

          See <nrn src dir>/src/nrnoc/hh.mod

          Hodgkin-Huxley sodium, potassium, and leakage channels. Range variables specific to this model are:

          This model uses the na and k ions to read ena, ek and write ina, ik.

          fastpas

          See <nrn src dir>/src/nrnoc/passive0.c

          Passive membrane channel. Same as the pas mechanism but hand coded to be a bit faster (it avoids the wasteful numerical derivative computation of the conductance and does not save the current). Generally not worth using, since passive channel computations are not usually the rate-limiting step of a simulation.

          extracellular

          Syntax:

          section.insert('extracellular')

          .vext[2] -- mV

          .i_membrane -- mA/cm2

          .xraxial[2] -- MOhms/cm

          .xg[2] -- mho/cm2

          .xc[2] -- uF/cm2

          .extracellular.e -- mV

          Description:

          Adds two layers of extracellular field to the section. vext is solved simultaneously with v. When the extracellular mechanism is present, v refers to the membrane potential and vext (i.e. vext[0]) refers to the extracellular potential just next to the membrane. Thus the internal potential is v + vext (but see BUGS).

          This mechanism is useful for simulating stimulation with extracellular electrodes, the response in the presence of an extracellular potential boundary condition computed by some external program, leaky patch clamps, and incomplete seals in the myelin sheath along with current flow in the space between the myelin and the axon. It is required when connecting a LinearMechanism (e.g. a circuit built with the NEURON Main Menu ‣ Build ‣ Linear Circuit ) to extracellular nodes.

          i_membrane correctly does not include contributions from ELECTRODE_CURRENT point processes.

          See i_membrane_ at CVode.use_fast_imem() .

          The figure illustrates the form of the electrical equivalent circuit when this mechanism is present. Note that previous documentation was incorrect in showing that e_extracellular was in series with the xg[nlayer-1], xc[nlayer-1] parallel combination. In fact it has always been the case that e_extracellular was in series with xg[nlayer-1], and xc[nlayer-1] was in parallel with that series combination.

          Note

          The only reason the standard distribution is built with nlayer=2 is so that when only a single layer is needed (the usual case), then e_extracellular is consistent with the previous documentation with the old default nlayer=1.

          e_extracellular is connected in series with the conductance of the last extracellular layer. With two layers the equivalent circuit looks like:

          Extracellular potentials do a great deal of violence to one’s intuition and it is important that the user carefully consider the results of simulations that use them. It is best to start out believing that there are bugs in the method and attempt to prove their existence.

          See <nrn src dir>/src/nrnoc/extcell.c and <nrn src dir>/examples/nrnoc/extcab*.hoc.

          NEURON can be compiled with any number of extracellular layers. See below.

          Warning

          xcaxial is also defined but is not implemented. If you need those then add them with the LinearMechanism .

          Prior versions of this document indicated that e_extracellular is in series with the parallel (xc,xg) pair. In fact it was in series with xg of the layer. The above equivalent circuit has been changed to reflect the truth about the implementation.

          In v4.3.1 2000/09/06 and before vext(0) and vext(1) are the voltages at the centers of the first and last segments instead of the zero area nodes.

          Now the above bug is fixed and vext(0) and vext(1) are the voltages at the zero area nodes.

          From extcell.c the comment is:

          In v4.3.1 2000/09/06 and before extracellular layers will not be connected across sections unless the parent section of the connection contains the extracellular mechanism. This is because the 0 area node of the connection is “owned” by the parent section. In particular, root nodes never contain extracellular mechanisms and thus multiple sections connected to the root node always appear to be extracellularly disconnected. This bug has been fixed. However it is still the case that vext(0) can be non-zero only if the section owning the 0 node has had the extracellular mechanism inserted. It is best to have every section in a cell contain the extracellular mechanism if any one of them does to avoid confusion with regard to (the in fact correct) boundary conditions.


          We want to express our gratitude to Professor Julio-Cesar Martinez-Trujillo and his research group at Western University in London, Canada, for valuable discussions.

          Bartsch, M. V., Loewe, K., Merkel, C., Heinze, H. J., Schoenfeld, M. A., Tsotsos, J. K., et al. (2017). Attention to color sharpens neural population tuning via feedback processing in the human visual cortex hierarchy. J. Neurosci. 37, 10346. doi: 10.1523/JNEUROSCI.0666-17.2017

          Bressler, S. L., Tang, W., Sylvester, C. M., Shulman, G. L., and Corbetta, M. (2008). Top-down control of human visual cortex by frontal and parietal cortex in anticipatory visual spatial attention. J. Neurosci. 28, 10056. doi: 10.1523/JNEUROSCI.1776-08.2008

          Buschman, T. J., and Kastner, S. (2015). From behavior to neural dynamics: an integrated theory of attention. Neuron 88, 127. doi: 10.1016/j.neuron.2015.09.017

          Buschman, T. J., and Miller, E. K. (2007). Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science 315, 1860. doi: 10.1126/science.1138071

          Busse, L., Katzner, S., and Treue, S. (2008). Temporal dynamics of neuronal modulation during exogenous and endogenous shifts of visual attention in macaque area MT. Proc. Natl. Acad. Sci. U.S.A. 105, 16380. doi: 10.1073/pnas.0707369105

          Carrasco, M. (2011). Visual attention: the past 25 years. Vision Res. 51, 1484. doi: 10.1016/j.visres.2011.04.012

          Cavelier, P., Hamann, M., Rossi, D., Mobbs, P., and Attwell, D. (2005). Tonic excitation and inhibition of neurons: ambient transmitter sources and computational consequences. Prog. Biophys. Mol. Biol. 87, 3. doi: 10.1016/j.pbiomolbio.2004.06.001

          Corbetta, M., Miezin, F. M., Dobmeyer, S., Shulman, G. L., and Petersen, S. E. (1991). Selective and divided attention during visual discriminations of shape, color, and speed: functional anatomy by positron emission tomography. J. Neurosci. 11, 2383.

          Corbetta, M., and Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3, 201. doi: 10.1038/nrn755

          Cutzu, F., and Tsotsos, J. K. (2003). The selective tuning model of attention: psychophysical evidence for a suppressive annulus around an attended item. Vision Res. 43, 205. doi: 10.1016/S0042-6989(02)00491-1

          Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press.

          Deco, G., and Lee, T. S. (2002). A unified model of spatial and object attention based on inter-cortical biased competition. Neurocomputing 44, 775. doi: 10.1016/S0925-2312(02)00471-X

          Desimone, R., and Duncan, J. (1995). Neural mechanisms of selective visual attention. Annu. Rev. Neurosci. 18, 193. doi: 10.1146/annurev.ne.18.030195.001205

          Destexhe, A., Mainen, Z. F., and Sejnowski, T. J. (1998). Kinetic models of synaptic transmission. Methods Neuronal Modeling 2, 1.

          Dorval, A. D., and White, J. A. (2005). Channel noise is essential for perithreshold oscillations in entorhinal stellate neurons. J. Neurosci. 25, 10025. doi: 10.1523/JNEUROSCI.3557-05.2005

          Eagleman, D. M., and Sejnowski, T. J. (2000). Motion integration and postdiction in visual awareness. Science 287, 2036. doi: 10.1126/science.287.5460.2036

          Fallah, M., Stoner, G. R., and Reynolds, J. H. (2007). Stimulus-specific competitive selection in macaque extrastriate visual area V4. Proc. Natl. Acad. Sci. U.S.A. 104, 4165. doi: 10.1073/pnas.0611722104

          Ferrera, V. P., Rudolph, K. K., and Maunsell, J. H. (1994). Responses of neurons in the parietal and temporal visual pathways during a motion task. J. Neurosci. 14, 6171.

          Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500. doi: 10.1113/jphysiol.1952.sp004764

          Hopf, J. M., Boehler, C. N., Luck, S. J., Tsotsos, J. K., Heinze, H. J., and Schoenfeld, M. A. (2006). Direct neurophysiological evidence for spatial suppression surrounding the focus of attention in vision. Proc. Natl. Acad. Sci. U.S.A. 103, 1053. doi: 10.1073/pnas.0507746103

          Hutt, A. (2012). The population firing rate in the presence of GABAergic tonic inhibition in single neurons and application to general anaesthesia. Cogn. Neurodyn. 6, 227. doi: 10.1007/s11571-011-9182-9

          Itti, L. (2005). Models of bottom-up attention and saliency. Neurobiol. Attent. 582, 576. doi: 10.1016/B978-012375731-9/50098-7

          Itti, L., and Koch, C. (2001). Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194. doi: 10.1038/35058500

          Itti, L., Rees, G., and Tsotsos, J. K. (2005). Neurobiology of Attention. Waltham, MA: Elsevier/Academic Press.

          Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15, 1063. doi: 10.1109/TNN.2004.832719

          James, W. (1891). The Principles of Psychology, Vol. 2, London, UK: Macmillan.

          Jensen, O., Goel, P., Kopell, N., Pohja, M., Hari, R., and Ermentrout, B. (2005). On the human sensorimotor-cortex beta rhythm: sources and modeling. Neuroimage 26, 347. doi: 10.1016/j.neuroimage.2005.02.008

          Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., and Hudspeth, A. (2000). Principles of Neural Science. New York, NY: McGraw-hill.

          Khayat, P. S., Niebergall, R., and Martinez-Trujillo, J. C. (2010). Attention differentially modulates similar neuronal responses evoked by varying contrast and direction stimuli in area MT. J. Neurosci. 30, 2188. doi: 10.1523/JNEUROSCI.5314-09.2010

          Koch, C., and Ullman, S. (1987). "Shifts in selective visual attention: towards the underlying neural circuitry," in Matters of Intelligence, ed L. M. Vaina (Dordrecht: Springer), 115.

          Kosai, Y., El-Shamayleh, Y., Fyall, A. M., and Pasupathy, A. (2014). The role of visual area V4 in the discrimination of partially occluded shapes. J. Neurosci. 34, 8570. doi: 10.1523/JNEUROSCI.1375-14.2014

          Ladenbauer, J., Augustin, M., and Obermayer, K. (2014). How adaptation currents change threshold, gain, and variability of neuronal spiking. J. Neurophysiol. 111, 939. doi: 10.1152/jn.00586.2013

          Lee, J., and Maunsell, J. H. (2009). A normalization model of attentional modulation of single unit responses. PLoS ONE 4:e4651. doi: 10.1371/journal.pone.0004651

          Lennert, T., and Martinez-Trujillo, J. (2011). Strength of response suppression to distracter stimuli determines attentional-filtering performance in primate prefrontal neurons. Neuron 70, 141. doi: 10.1016/j.neuron.2011.02.041

          Loach, D. P., Tombu, M., and Tsotsos, J. K. (2005). Interactions between spatial and temporal attention: an attentional blink study. J. Vis. 5:109. doi: 10.1167/5.8.109

          Luck, S. J., Chelazzi, L., Hillyard, S. A., and Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol. 77, 24. doi: 10.1152/jn.1997.77.1.24

          Martinez-Trujillo, J. C., and Treue, S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Curr. Biol. 14, 744. doi: 10.1016/j.cub.2004.04.028

          McAdams, C. J., and Maunsell, J. H. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J. Neurosci. 19, 431.

          Niebur, E., and Koch, C. (1994). A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons. J. Comput. Neurosci. 1, 141. doi: 10.1007/BF00962722

          Oliva, A., Torralba, A., Castelhano, M. S., and Henderson, J. M. (2003). "Top-down control of visual attention in object detection," in Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on: IEEE (Barcelona), I253–I256.

          Pape, H. C. (1996). Queer current and pacemaker: the hyperpolarization-activated cation current in neurons. Annu. Rev. Physiol. 58, 299. doi: 10.1146/annurev.ph.58.030196.001503

          Pestilli, F., Viera, G., and Carrasco, M. (2007). How do attention and adaptation affect contrast sensitivity? J. Vis. 7:9. doi: 10.1167/7.7.9

          Posner, M. I. (2011). Cognitive Neuroscience of Attention. New York, NY: Guilford Press.

          Rao, S. C., Rainer, G., and Miller, E. K. (1997). Integration of what and where in the primate prefrontal cortex. Science 276, 821. doi: 10.1126/science.276.5313.821

          Reynolds, J. H., Chelazzi, L., and Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. J. Neurosci. 19, 1736.

          Reynolds, J. H., and Heeger, D. J. (2009). The normalization model of attention. Neuron 61, 168. doi: 10.1016/j.neuron.2009.01.002

          Rothenstein, A. L., and Tsotsos, J. K. (2014). Attentional modulation and selection - an integrated approach. PLoS ONE 9:e99681. doi: 10.1371/journal.pone.0099681

          Rutishauser, U., Walther, D., Koch, C., and Perona, P. (2004). "Is bottom-up attention useful for object recognition?," in Computer Vision and Pattern Recognition, 2004. CVPR 2004, Proceedings of the 2004 IEEE Computer Society Conference on: IEEE (Washington, DC), II37–II44.

          Shriki, O., Sompolinsky, H., and Hansel, D. (2003). Rate models for conductance based cortical neuronal networks. Neural Comput. 15, 1809. doi: 10.1162/08997660360675053

          Spratling, M. W., and Johnson, M. H. (2004). A feedback model of visual attention. J. Cogn. Neurosci. 16, 219. doi: 10.1162/089892904322984526

          Tsotsos, J. K. (1990). Analyzing vision at the complexity level. Behav. Brain Sci. 13, 423. doi: 10.1017/S0140525X00079577

          Tsotsos, J. K. (2011). A Computational Perspective on Visual Attention. Cambridge: MIT Press.

          Tsotsos, J. K., Culhane, S. M., Wai, W. Y. K., Lai, Y., Davis, N., and Nuflo, F. (1995). Modeling visual attention via selective tuning. Artif. Intell. 78, 507. doi: 10.1016/0004-3702(95)00025-9

          Tsotsos, J., and Rothenstein, A. (2011). Computational models of visual attention. Scholarpedia 6:6201. doi: 10.4249/scholarpedia.6201

          van Aerde, K. I., Mann, E. O., Canto, C. B., Heistek, T. S., Linkenkaer-Hansen, K., Mulder, A. B., et al. (2009). Flexible spike timing of layer 5 neurons during dynamic beta oscillation shifts in rat prefrontal cortex. J. Physiol. 587(Pt 21), 5177. doi: 10.1113/jphysiol.2009.178384

          Whittington, M. A., Traub, R. D., Kopell, N., Ermentrout, B., and Buhl, E. H. (2000). Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int. J. Psychophysiol. 38, 315. doi: 10.1016/S0167-8760(00)00173-2

          Wiesenfeld, K., and Moss, F. (1995). Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs. Nature 373, 33. doi: 10.1038/373033a0

          Williford, T., and Maunsell, J. H. (2006). Effects of spatial attention on contrast response functions in macaque area V4. J. Neurophysiol. 96, 40. doi: 10.1152/jn.01207.2005

          Wilson, H. R. (1999). Spikes, Decisions, and Actions: the Dynamical Foundations of Neurosciences, Don Mills, ON: Oxford University Press, Inc.

          Keywords: visual attention, single cell, ST-neuron, firing rate, neural selectivity

          Citation: Avella Gonzalez OJ and Tsotsos JK (2018) Short and Long-Term Attentional Firing Rates Can Be Explained by ST-Neuron Dynamics. Front. Neurosci. 12:123. doi: 10.3389/fnins.2018.00123

          Received: 11 August 2017 Accepted: 15 February 2018
          Published: 02 March 2018.

          Xavier Otazu, Universitat Autònoma de Barcelona, Spain

          Keith Schneider, University of Delaware, United States
          Jihyun Yeonan-Kim, San Jose State University, United States

          Copyright © 2018 Avella Gonzalez and Tsotsos. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


          1. Introduction

          Spiking neural networks that emulate neural ensembles have been studied extensively within the context of dynamical systems (Izhikevich, 2007), and modeled as a set of differential equations that govern the temporal evolution of their state variables. For a single neuron, the state variables are usually its membrane potential and the conductances of ion channels that mediate changes in the membrane potential via flux of ions across the cell membrane. A vast body of literature, ranging from the classical Hodgkin-Huxley model (Hodgkin and Huxley, 1952), FitzHugh-Nagumo model (FitzHugh, 1961), Izhikevich model (Izhikevich, 2003) to simpler integrate-and-fire models (Abbott, 1999), treats the problem of single-cell excitability at various levels of detail and biophysical plausibility. Individual neuron models are then connected through synapses, bottom-up, to form large-scale spiking neural networks.

          An alternative to this bottom-up approach is a top-down approach that treats the process of spike generation and neural representation of excitation in the context of minimizing some measure of network energy. The rationale for this approach is that physical processes occurring in nature have a tendency to self-optimize toward a minimum-energy state. This principle has been used to design neuromorphic systems where the state of a neuron in the network is assumed to be either binary in nature (spiking or not spiking) (Jonke et al., 2016), or replaced by its average firing rate (Nakano et al., 2015). However, in all of these approaches, the energy functionals have been defined with respect to some statistical measure of neural activity, for example spike rates, instead of continuous-valued neuronal variables like the membrane potential. As a result, in these models it is difficult to independently control different neuro-dynamical parameters, for example the shape of the action potential, bursting activity or adaptation in neural activity, without affecting the network solution.

          In Gangopadhyay and Chakrabartty (2018), we proposed a model of a Growth Transform (GT) neuron which reconciled the bottom-up and top-down approaches such that the dynamical and spiking responses were derived directly from a network objective or an energy functional. Each neuron in the network implements an asynchronous mapping based on polynomial Growth Transforms, which is a fixed-point algorithm for optimizing polynomial functions under linear and/or bound constraints (Baum and Sell, 1968; Gangopadhyay et al., 2017). It was shown in Gangopadhyay and Chakrabartty (2018) that a network of GT neurons can solve binary classification tasks while producing stable and unique neural dynamics (for example, noise-shaping, spiking and bursting) that could be interpreted using a classification margin. However, in the previous formulation, all of these neuro-dynamical properties were directly encoded into the network energy function. As a result, the formulation did not allow independent control and optimization of different neuro-dynamics. In this paper, we address these limitations by proposing a novel GT spiking neuron and population model, along with a neuromorphic framework, according to the following steps:

          • We first remap the synaptic interactions in a standard spiking neural network in a manner that the solution (steady-state attractor) could be encoded as a first-order condition of an optimization problem. We show that this network objective function or energy functional can be interpreted as the total extrinsic power required by the remapped network to operate, and hence a metric to be minimized.

          • We then introduce a dynamical system model based on Growth Transforms that evolves the network toward this steady-state attractor under the specified constraints. The use of Growth Transforms ensures that the neuronal states (membrane potentials) involved in the optimization are always bounded and that each step in the evolution is guaranteed to reduce the network energy.

          • We then show how gradient discontinuity in the network energy functional can be used to modulate the shape of the action potential while maintaining the local convexity and the location of the steady-state attractor.

          • Finally, we use the properties of Growth Transforms to generalize the model to a continuous-time dynamical system. The formulation will then allow for modulating the spiking and the population dynamics across the network without affecting network convergence toward the steady-state attractor.

          We show that the proposed framework can be used to implement a network of coupled neurons that can exhibit memory, global adaptation, and other interesting population dynamics under different initial conditions and based on different network states. We also illustrate how decoupling transient spiking dynamics from the network solution and spike-shapes could be beneficial by using the model to design a spiking associative memory network that can recall a large number of patterns with high accuracy while using fewer spikes than traditional associative memory networks. This paper is also accompanied by a publicly available software implementing the proposed model (Mehta et al., 2019) using MATLAB©. Users can experiment with different inputs and network parameters to explore and create dynamics beyond those reported in this paper. In the future, we envision that the model could be extended to incorporate spike-based learning within an energy-minimization framework similar to the framework used in traditional machine learning models (LeCun et al., 2006). This could be instrumental in bridging the gap between neuromorphic algorithms and traditional energy-based machine learning models.
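          The core property described above, a fixed-point iteration that keeps neuronal states bounded while guaranteeing that every step reduces the network energy, can be illustrated with a minimal sketch. The sketch below is not the paper's actual Growth Transform update; it uses projected gradient descent on a quadratic stand-in for the energy functional, and all names (`Q`, `b`, `v_max`) are illustrative assumptions.

```python
import numpy as np

def network_energy(v, Q, b):
    # Quadratic stand-in for the network energy functional
    return 0.5 * v @ Q @ v - b @ v

def bounded_descent_step(v, Q, b, v_max, lr):
    # One fixed-point iteration: descend the energy gradient, then
    # project the membrane potentials back into [-v_max, v_max].
    return np.clip(v - lr * (Q @ v - b), -v_max, v_max)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T + 5.0 * np.eye(5)          # positive-definite coupling
b = rng.standard_normal(5)
lr = 1.0 / np.linalg.eigvalsh(Q)[-1]   # step <= 1/L guarantees descent
v = rng.uniform(-1.0, 1.0, 5)

energies = [network_energy(v, Q, b)]
for _ in range(300):
    v = bounded_descent_step(v, Q, b, v_max=1.0, lr=lr)
    energies.append(network_energy(v, Q, b))

# Energy is non-increasing and states stay bounded throughout
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
assert np.all(np.abs(v) <= 1.0)
```

          The steady state reached by the iteration plays the role of the network's attractor; the actual GT update additionally shapes the transient trajectory (spikes) without moving this attractor.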


          Tonic firing typically occurs without presynaptic input and can be viewed as background activity. It is often characterized by steady action potential firing at a constant frequency. Note that not all neurons exhibit tonic activity at rest. Tonic firing may serve to maintain a steady background level of a certain neurotransmitter, or it can act as a mechanism by which both a decrease and an increase in presynaptic input can be transmitted. When a neuron is silent at rest, only an increase in presynaptic activity can be transmitted postsynaptically.

          In contrast, phasic firing occurs when a neuron is activated by presynaptic activity, and it adds activity on top of any background activity the neuron may have. It is typically restricted to one, a few, or a short burst of action potentials, after which the activity quickly returns to the resting state.
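          How the rate of tonic firing depends on a sustained drive can be sketched with the standard leaky integrate-and-fire idealization (a textbook model, not one used in any of the articles excerpted here; all parameter values are illustrative). Just above rheobase the interspike interval diverges, so in this idealized neuron the sustained firing rate can be made arbitrarily low, while the refractory period caps it from above.

```python
import math

def lif_rate(I, tau=0.02, R=1.0, v_th=1.0, t_ref=0.002):
    """Steady-state firing rate (Hz) of a leaky integrate-and-fire
    neuron driven by a constant current I (arbitrary units).
    Returns 0 below rheobase (R*I <= v_th)."""
    if R * I <= v_th:
        return 0.0
    # Time to charge from reset (0 mV) to threshold, plus refractory time
    t_spike = tau * math.log(R * I / (R * I - v_th))
    return 1.0 / (t_ref + t_spike)

# At threshold there is no sustained firing; just above it the rate is
# low and grows with drive, saturating toward 1/t_ref for strong input.
assert lif_rate(1.0) == 0.0
assert 0.0 < lif_rate(1.001) < lif_rate(1.5) < lif_rate(5.0) < 1.0 / 0.002
```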


          Results

          Exposure to chronic cold induced a long-lasting decrease in DA neuron population activity

          Rats were randomly assigned to either control or chronic cold groups, and the effect of stress on DA neuron activity was assessed. In control rats, the number of spontaneously active DA neurons encountered per electrode track was 1.1 ± 0.1 (n= 9 rats, 85 DA neurons), which is consistent with our previous studies (Valenti & Grace, 2010; Valenti et al., 2011). Two-week exposure to cold significantly reduced population activity by 46% to 0.5 ± 0.03 neurons/track (n= 6 rats, 27 DA neurons; CTRL vs CCE: one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A ), which is consistent with our previous report (Moore et al., 2001). To investigate whether chronic cold induced persistent changes in VTA DA neuron activity, following cold exposure a subgroup of CCE rats was housed in the ambient-temperature colony for an extra 7 days. In these rats, the number of spontaneously active DA neurons remained significantly reduced when compared to controls (15.1% below controls; CTRL vs 7d post CCE: 0.8 ± 0.1, n= 8 rats, 57 DA neurons; one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A ); nevertheless, the population activity was significantly higher than observed in CCE rats tested 18� hours after removal from cold (CCE vs 7d post CCE: one-way ANOVA, F(2,20)= 16.9, P< 0.001; Figure 1A ). We reported previously that different stressors can exert differential effects on DA neuron population activity depending on the relative location of the neurons within the VTA (Valenti et al., 2011). Thus, we further examined whether the effects of CCE were dependent on the location of the DA neurons across the medial-lateral extent of the VTA. 
We found that the CCE-induced decrease in population activity occurred primarily in the DA neurons located either in the medial (CTRL= 1.3 ± 0.2, n= 34 DA neurons; CCE= 0.7 ± 0.1, n= 12 DA neurons; one-way ANOVA, F(2,20)= 3.82, P= 0.039; Figure 1B ) or in the central (CTRL= 1.1 ± 0.1, n= 29 DA neurons; CCE= 0.4 ± 0.1, n= 8 DA neurons; one-way ANOVA, F(2,20)= 4.51, P= 0.024; Figure 1B ) part of the VTA for rats tested on the day after cold removal. However, no difference was observed in the lateral part of the VTA (one-way ANOVA, P= 0.939; Figure 1B ). In contrast, DA neurons recorded from rats tested 7 days after CCE did not exhibit differences in activity in any of the 3 locations examined, when compared either to control or to CCE tested on the following day (CTRL vs 7d post CCE, one-way ANOVA: medial, P= 0.20; central, P= 0.953; lateral, P= 0.939; CCE vs 7d post CCE, one-way ANOVA: medial, P= 1.00; central, P= 0.180; lateral, P= 0.939; Figure 1B ).

          A) Exposure to cold stress for 14� days induced a pronounced reduction in the number of spontaneously active VTA DA neurons (population activity; black bar) compared to controls (white bar). This attenuation in population activity persisted when examined 7 days following removal from cold exposure (7d post CCE, hatched bar) (* one-way ANOVA, P< 0.05; see text for details). B) Chronic cold selectively decreased the number of spontaneously active DA neurons located in the medial (M) and central (C) but not in the lateral (L) VTA of rats tested at 18� hours after chronic cold exposure (black squares) compared to controls (white circles) (* CTRL vs CCE: one-way ANOVA, medial, P= 0.039; central, P= 0.024). In rats tested 7 days post CCE there was no significant change in the DA neuron population located in any subdivision of the VTA (black diamonds). No significant changes were observed in either average firing rate (C) or average percent of burst firing (D).

          Chronic exposure to cold did not significantly affect DA neuron average firing rate (CTRL: 4.0 ± 0.3Hz, n= 9 rats; CCE: 3.3 ± 0.6Hz, n= 6 rats; one-way ANOVA, P= 0.2828; Figure 1C ) or burst firing (CTRL: 23.7 ± 3.2%, n= 9 rats; CCE: 23.3 ± 2.9%, n= 6 rats; one-way ANOVA, P= 0.2935; Figure 1D ) at either post-exposure time point. Moreover, there was no difference in the distribution of percent spikes in bursts for neurons recorded in either exposure group when compared to control (18� hours: two-sample Kolmogorov-Smirnov test, P= 0.5044; 7 days: two-sample Kolmogorov-Smirnov test, P= 0.3875).

          Thus, these data suggest that a maintained, inescapable, mild stressor, i.e., a 2-week chronic exposure to cold, produced prominent changes in VTA DA neuron population activity, which were still present in an attenuated fashion a week after cold removal. Moreover, similar to the effects observed with repetitive footshock (Valenti et al., 2011), chronic cold stress affected VTA DA neuron spontaneous activity differentially depending on the location within the VTA.

          Chronic cold exposure did not alter mPFC pyramidal neuron activity

          The PFC is a region known to regulate stress responses subcortically (Abercrombie et al., 1989; Finlay et al., 1995; Cabib & Puglisi-Allegra, 1996), and recent studies suggested that the mPFC attenuates stress responses following a controllable stressor (Amat et al., 2008). Thus, the effects of CCE on mPFC pyramidal neurons were examined. In control rats, putative pyramidal neurons exhibited an average firing rate of 1.1 ± 0.2Hz (n= 7 rats, 34 neurons). Two weeks of exposure to cold did not significantly alter pyramidal neuron firing rate (CCE: 0.9 ± 0.1Hz, n= 7 rats, 26 neurons; Kruskal-Wallis one-way ANOVA on Ranks, P= 0.704). In addition, chronic cold stress did not change mPFC pyramidal neuron percent burst firing (CTRL: 41.5 ± 3.0%; CCE: 40.0 ± 4.4%; one-way ANOVA, P= 0.769) or the average interspike interval (CTRL: 16.6 ± 1 msec; CCE: 16.6 ± 1 msec; Kruskal-Wallis one-way ANOVA on Ranks, P= 0.820). Thus, cold stress did not affect any of the parameters of mPFC pyramidal neuron activity measured.

          Previous exposure to chronic cold prevented the acute restraint-induced increase in VTA DA neuron population activity

          We have shown that restraint stress, given either acutely or repeatedly, increased DA neuron population activity (Valenti et al., 2011). Thus, the effect of restraint on DA neuron activity of untreated rats was opposite in direction from that found following chronic cold ( Figure 1 ). Therefore, we examined whether previous exposure to chronic cold affected the restraint-induced increase in VTA DA neuron population activity or restraint-induced amphetamine cross-sensitization of locomotor activity. A two-way ANOVA was applied to examine the effects of CCE, AR and the interaction between CCE and AR, followed by the Holm-Sidak method for multiple comparisons (source of variation: CCE, F(1,26)= 99.3, P< 0.001; AR, F(1,26)= 62.9, P< 0.001; CCE x AR, F(1,1,26)= 7.01, P= 0.0136). Thus, an acute restraint stress session induced a pronounced activation of VTA DA neuron population activity (CTRL vs AR: 1.9 ± 0.1, n= 8 rats, 135 DA neurons; two-way ANOVA, F(1,26)= 99.3, P< 0.001; Figure 2A ), as previously reported (Valenti et al., 2011). Acute restraint stress also increased DA neuron population activity in rats pre-exposed to chronic cold compared to CCE alone, restoring the CCE-induced decrease in population activity toward control levels (CCE vs CCE + AR: 0.9 ± 0.1, n= 7 rats, 54 DA neurons; two-way ANOVA, F(1,26)= 99.3, P< 0.001; Figure 2A ). However, the restraint-induced increase in DA neuron population activity observed in CCE rats was far less pronounced than in AR alone (AR vs CCE + AR: two-way ANOVA, F(1,26)= 62.9, P< 0.001; Figure 2A ). In addition, there was a significant interaction between factors (CCE x AR: two-way ANOVA, F(1,1,26)= 7.01, P= 0.0136). Thus, pre-exposure to chronic cold appeared to protect the DA system from the effects of acute restraint stress.

          In examining the location of VTA DA neuron population change within the VTA, there were marked differences observed following the stress protocol. Thus, acute restraint increased DA neuron firing across the medial, central, and lateral portions of the VTA (CTRL vs AR, two-way ANOVA: medial, F(1,21)= 4.5, P= 0.045; central, F(1,21)= 50.4, P< 0.001; lateral, F(1,21)= 8.1, P= 0.012; Figure 2B ), whereas chronic cold decreased population activity primarily in the medial and central VTA ( Figures 1B and 2B ). In CCE animals subsequently exposed to acute restraint, the increase in medial VTA DA neuron firing remained (AR: 1.9 ± 0.2, n= 8 rats, 45 DA neurons; CCE + AR: 1.6 ± 0.1, n= 7 rats, 33 DA neurons; two-way ANOVA, P= 0.258), which corresponded to a 135.1% increase from CCE alone (CCE vs CCE + AR: two-way ANOVA, F(1,21)= 13.4, P= 0.0014; Figure 2B ). In contrast, previous exposure to chronic cold prevented the prominent increase induced by restraint stress in the central (AR: 1.8 ± 0.1, n= 8 rats, 44 DA neurons; CCE + AR: 0.5 ± 0.1, n= 7 rats, 11 DA neurons; two-way ANOVA, F(1,21)= 50.4, P< 0.001) and lateral VTA (AR: 1.9 ± 0.3, n= 7 rats, 40 DA neurons; CCE + AR: 0.7 ± 0.1, n= 5 rats, 10 DA neurons; two-way ANOVA, F(1,15)= 8.15, P= 0.012), with population activity not significantly different from that observed following CCE alone (CCE vs CCE + AR: two-way ANOVA, central, P= 0.658; lateral, P= 0.659; Figure 2B ). In addition, a significant interaction was observed between the effects of cold and restraint stress only for DA neurons located in the central VTA (two-way ANOVA, F(1,1,21)= 8.24, P= 0.0092). Therefore, CCE attenuated the ability of restraint stress to increase VTA neuron activity in regions of the VTA that project to more associative regions of the striatum, without affecting the increase in the reward-related medial VTA regions (Ikemoto, 2007; Lodge & Grace, 2011; Valenti et al., 2011).

          No significant change in either average firing rate (for CTRL and CCE see above; AR: 4.3 ± 0.2Hz, n= 8 rats; CCE + AR: 3.8 ± 0.1Hz, n= 7 rats; two-way ANOVA, source of variation: CCE, P= 0.0841; AR, P= 0.2742) or average percent burst firing (for CTRL and CCE see above; AR: 35.3 ± 4.1%, n= 7 rats; CCE + AR: 22.8 ± 4.4%, n= 7 rats; two-way ANOVA, source of variation: CCE, P= 0.0995; AR, P= 0.1526; Figure 2C ) was observed in any of the groups tested or when the interaction of the effects of the 2 stress protocols was examined (FR: two-way ANOVA, P= 0.8404; %B: two-way ANOVA, P= 0.1216). Given that acute restraint was shown to increase the average percent of burst firing in control rats (Valenti et al., 2011), we examined whether pre-exposure to chronic cold also prevented the restraint-induced increase in burst firing observed in untreated rats ( Figure 2C ). Further analysis revealed that pre-exposure to cold stress altered the distribution in percent burst firing, with many more neurons showing low levels of burst discharge following chronic cold + acute restraint (AR vs CCE + AR: two-sample Kolmogorov-Smirnov test, P= 0.0424; Figure 2D ).

          Effects of chronic exposure to cold on amphetamine-induced locomotor activity

          Previous studies from our laboratory suggest that an increased level of VTA DA neuron population activity correlates with an increased locomotor response to amphetamine (Lodge & Grace, 2008; Valenti et al., 2011). Given that CCE induced a pronounced reduction of DA neuron population activity, and that CCE attenuates the electrophysiological response to restraint stress, this relationship was examined behaviorally. Thus, both spontaneous and amphetamine-induced locomotor activity were recorded in 4 groups of rats: control, chronic cold, restraint, and chronic cold + restraint. Baseline locomotor activity was recorded for 30 min and measured in separate open-field arenas (Coulbourn Instruments). Rats were then removed from the arenas and injected i.p. with 0.5 mg/kg amphetamine. A three-way ANOVA was applied to analyze the effects of CCE, AR, time, and their interactions, and all groups were compared (source of variation: CCE, F(1,864)= 4.275, P= 0.039; AR, F(1,864)= 1.485, P= 0.223; time, F(23,864)= 12.962, P< 0.001; interactions: CCE x AR, F(1,1,864)= 0.905, P= 0.342; CCE x time, F(1,23,864)= 0.47, P= 0.985; AR x time, F(1,23,864)= 3.137, P< 0.001; CCE x AR x time, F(1,1,23,864)= 0.426, P= 0.992). In addition, a pairwise multiple comparison procedure (Holm-Sidak method) was applied following the ANOVA to compare these factors. Thus, administration of amphetamine to CCE rats induced a rapid and transient increase in locomotor activation compared to matched controls for the first 5 min after drug administration (CTRL, n= 8 rats vs CCE, n= 9 rats; three-way ANOVA, P= 0.040; Figure 3 ); however, locomotor activity of CCE rats was slightly but not significantly lower than that of controls at the subsequent time points. 
Consistent with our previous study (Valenti et al., 2011), 0.5 mg/kg amphetamine induced a pronounced increase in locomotor activity of AR rats compared to controls during the first 15 min post-drug (CTRL, n= 8 rats vs AR, n= 12 rats; three-way ANOVA, 35min: P< 0.001; 40min: P= 0.042; 45min: P= 0.003; 50min: P= 0.026). In contrast, in rats that received acute restraint on the day after cold removal, chronic cold stress failed to significantly affect the acute restraint-induced increase in the locomotor response to amphetamine (AR, n= 12 rats vs CCE + AR, n= 11 rats; three-way ANOVA, P= 0.374; Figure 3B ).

          Thus, the decrease in DA neuron population activity observed in CCE rats was not found to correlate with a decrease in the amplitude of amphetamine-induced locomotion. Moreover, 2 weeks of continuous exposure to cold attenuated the restraint stress-induced increase in DA neuron population activity but did not affect restraint-induced behavioral activation to amphetamine.


          Neuronal Firing Rate As Code Length: a Hypothesis

          Many theories assume that a sensory neuron’s higher firing rate indicates a greater probability of its preferred stimulus. However, this contradicts (1) adaptation phenomena, in which prolonged exposure to, and thus increased probability of, a stimulus reduces the firing rates of cells tuned to that stimulus, and (2) the observation that unexpected (low-probability) stimuli capture attention and increase neuronal firing. Other theories posit that the brain builds predictive/efficient codes for reconstructing sensory inputs. However, they cannot explain why the brain preserves some information while discarding the rest. We propose that in sensory areas, projection neurons’ firing rates are proportional to optimal code length (i.e., negative log estimated probability), and their spike patterns are the code, for useful features in inputs. This hypothesis explains adaptation-induced changes of V1 orientation tuning curves and bottom-up attention. We discuss how the modern minimum-description-length (MDL) principle may help understand neural codes. Because regularity extraction is relative to a model class (defined by cells) via its optimal universal code (OUC), MDL matches the brain’s purposeful, hierarchical processing without input reconstruction. Such processing enables input compression/understanding even when model classes do not contain true models. Top-down attention modifies lower-level OUCs via feedback connections to enhance transmission of behaviorally relevant information. Although OUCs concern lossless data compression, we suggest possible extensions to lossy, prefix-free neural codes for prompt, online processing of the most important aspects of stimuli while minimizing behaviorally relevant distortion. Finally, we discuss how neural networks might learn MDL’s normalized maximum likelihood (NML) distributions from input data.
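          The central quantity in this hypothesis, the Shannon optimal code length as the negative log of an estimated probability, is easy to illustrate with a toy sketch (this is only the information-theoretic identity, not the authors’ neural model): features the observer has adapted to are assigned high probability and hence short codes, which under the hypothesis corresponds to low firing rates, while surprising features get long codes and high rates.

```python
import math

def code_length_bits(p):
    # Shannon optimal code length (bits) for a feature whose
    # estimated probability is p; higher probability -> shorter code.
    return -math.log2(p)

# A 50%-probable feature needs 1 bit, a 25%-probable one needs 2 bits,
# and a rare (surprising) feature needs a much longer code.
assert code_length_bits(0.5) == 1.0
assert code_length_bits(0.25) == 2.0
assert code_length_bits(0.01) > code_length_bits(0.5)
```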


