Multimodal dataset linking wide‐field calcium imaging to behavior changes in operant lever‐pull task in mice – Scientific Data

This resource aims to advance open neuroscience research by enabling investigations into the complex relationships between cortical activity, motor behavior, and environmental factors during learning [26]. The multimodal nature of our dataset supports diverse analytical approaches, from traditional behavioral metrics to advanced neural circuit analyses. Through open distribution of this comprehensive dataset, we seek to promote transparent, reproducible research and facilitate novel insights into the neural mechanisms underlying behavior changes.

Transgenic mice for wide-field imaging were obtained by crossing VGluT1-Cre mice (B6.Cg-Slc17a7tm1.1(cre)Hze/J, strain #037512, Jackson Laboratory) and Ai162 mice (B6.Cg-Igs7tm162.1(tetO-GCaMP6s,CAG-tTA2)Hze/J, strain #031562, Jackson Laboratory). All mice had ad libitum access to food and water and were housed on a 12:12 h light-dark cycle (light cycle: 8 AM-8 PM). Both males (n = 11) and females (n = 14) were used for the experiments; the mice were aged 17-31 weeks at the end of the experiments. For at least 3 days before surgery, mice were supplied with, and habituated to, a carprofen-containing sweetened gel (3 g/day, MediGel Hazelnut or DietGel Boost, ClearH2O, ME, USA; 0.48 mg carprofen/g gel, Rimadyl, Zoetis, NJ, USA), which was also occasionally provided after surgery to reduce post-operative pain. All animal experiments were approved by the Institutional Animal Care and Use Committee of the University of Tokyo, Japan.

Surgical procedures were performed according to our previous report (Kondo & Matsuzaki, 2021) with small modifications. Mice were anesthetized by intraperitoneal or intramuscular injection of a mixture of ketamine (74 mg/kg; Daiichi Sankyo Propharma, Tokyo, Japan) and xylazine (10 mg/kg; Bayer Pharma Japan, Osaka, Japan). After induction of anesthesia, an eye ointment (0.3% w/v ofloxacin; Tarivid, Santen Pharmaceutical, Osaka, Japan) was applied to prevent drying of the eyes and infection. During surgery, body temperature was maintained at 36-37 °C with a heating pad. The head of the mouse was sterilized with 70% ethanol, the hair was shaved, and the scalp was incised. After the skull was exposed, the soft tissue on the skull was removed. The temporal muscle was carefully detached from the cranium to improve observation of the temporal cortical areas. A custom head-plate (Misumi, Tokyo, Japan) was attached to the skull with dental resin cement (Estecem II, Tokuyama Dental, Tokyo, Japan). To prevent excitation light from entering directly through gaps between the head-plate and the eyes of the mouse, the gaps were filled with a mixture of dental resin cement (Fuji Lute BC; GC, Tokyo, Japan) and carbon powder (FUJIFILM Wako Pure Chemical, Osaka, Japan). To prevent drying of the skull surface, a thin layer of cyanoacrylate adhesive (Vetbond; 3M, MN, USA) and dental resin cement (Super Bond; Sun Medical, Shiga, Japan) were applied. After the dental resin layer had cured, a UV-curing optical adhesive (NOA81, Norland Products, NJ, USA) was applied several times. This enabled us to observe cortical activity longitudinally through the intact cranium. The adhesive-covered skull was covered with a silicone elastomer (Dent-silicone V, Shofu, Kyoto, Japan) to protect it from dust. An isotonic saline solution with 5% (w/v) glucose and the anti-inflammatory analgesic carprofen (5 mg/kg, Rimadyl) was injected intraperitoneally after all surgical procedures. The water control schedule was started after at least 3 days of recovery.

All behavioral tasks and imaging were performed in darkness inside a dedicated sound-attenuating task box equipped with a lever unit, licking recorder, sound presentation system, and water reward feeder (O'hara & Co., Tokyo, Japan). We conducted all behavioral tasks and recorded analog voltages from the behavioral sensors with LabVIEW (ver. 2021, National Instruments, TX, USA). The atmosphere of the task box was monitored during training; temperature, humidity, atmospheric pressure, and CO2 concentration were measured with electronic sensors (BME680, Bosch Sensortec, Reutlingen, Germany; MH-Z19C, Zhengzhou Winsen Electronics Technology Co., Zhengzhou, China) connected to a microcontroller (Seeed XIAO RP2040, Seeed Technology Co., Shenzhen, China) via the I2C interface. Outputs of the atmospheric sensors were recorded on a PC at 20 Hz via a USB serial connection. The head-fixed mouse was placed in the body chamber. A lever and an immobile pawrest were set in front of the right and left forelimbs, respectively. The lever was movable but stabilized at a base position by a pair of permanent magnets. A force of 0.04 N was required to initiate a lever pull. The lever could be pulled up to a distance of 4 mm, where it was physically blocked, and returned to the base position by the magnetic force when released. The position of the lever was recorded with a rotary encoder (MES-12-2000P, Microtech Laboratory, Kanagawa, Japan) mounted 80 mm from the lever tip. The pulse outputs of the rotary encoder were counted with an NI-DAQ device (USB-6229 or PCIe-6321, National Instruments, TX, USA), converted into arc length, and recorded together with the other analog data.

A lick spout was placed in front of the mouse, aligned with its mouth. Licking behavior was monitored by electrically detecting contact between the tongue and the lick spout, with the signals conveyed to an analog input of the NI-DAQ. All analog voltage inputs, frame synchronization pulses, and values from the environmental sensors were recorded at a sampling rate of 5 kHz. Sound cues (10 kHz sinusoidal tone, 70 dB sound pressure level, SPL) were presented through a speaker (FT28D, Fostex, Tokyo, Japan) on the left side, 25 cm from the animal. The water reward (4 μL/drop) was delivered by a micro-pump unit and released from the lick spout mentioned above. Body movement was recorded with a load cell with a rated capacity of 100 g. The displacement detected by the load cell was converted by an instrumentation amplifier (LT1167, Analog Devices, MA, USA) into an analog voltage and transmitted to an analog input of the NI-DAQ. This voltage was not calibrated against an actual weight, so its unit is arbitrary. Visual stimuli were emitted from the tip of a φ5 mm stainless steel tube with a red LED attached at its opposite end, and vibro-tactile stimuli were generated as a vibration (linear vibration actuator, LD14-002, Nidec, Kyoto, Japan) or an air puff (0.1 MPa) on the whisker pad. The transistor-transistor-logic (TTL) signals controlling reward delivery and sensory stimuli were recorded as analog voltages with the NI-DAQ.

Our task schedule spanned approximately 1 month. First, to facilitate acclimatization of the mice to the environment, we conducted 3-8 days of pre-training (see below). On the last pre-training day, we conducted the first resting-state recording session after the pre-training (recording day 0). On the following days, we conducted one task training session per day, with occasional training breaks of 2-3 days without any handling (e.g., over weekends; 3.1 ± 0.6 (SD) breaks throughout training; 2.3 ± 0.7 days/break; n = 25 mice). Resting-state recording sessions were also conducted after the task training sessions on recording days 1, 7, and 15. Of note, for one animal, the mid-training resting-state session was performed on recording day 8 instead of day 7 (Table 1). The sensory-mapping session was held on recording day 16, i.e., 1-6 days after recording day 15. The entire recording period (recording days 1-16) spanned 24.3 ± 3.2 calendar days.

The lever-pull task was performed for 30 min per session each day. A lever pull was defined as an epoch during which the mouse continuously held the lever more than 1 mm from its base position (the maximum pull distance was 4 mm). A trial started with presentation of the sound cue (10 kHz pure tone, 70 dB SPL, 200 ms). A trial was considered successful if the mouse pulled the lever for longer than a defined required duration (T) within 1 s of the sound cue onset. For each success, a water reward (4 μL) was automatically delivered from the lick spout.

By monitoring the lever position online, we ensured that the mouse could attempt a lever pull only once during the 1 s cued period. If the lever entered the pulled state but returned to the non-pulled state before the required duration T had elapsed, we considered the trial a failure and aborted it, even if the 1 s cued period had not yet ended. Conversely, if the mouse never put the lever into the pulled state during the cued period, we considered the trial a miss.

The inter-trial interval was randomly set between 3 and 4 s after each trial; if a mouse pulled the lever during the inter-trial interval, the interval was extended by another 3-4 s until the next sound cue. The required lever-pull time, T, was adjusted according to behavioral performance (see the sketch below). The initial T in the first session was set to 1 ms. T was extended by 50 ms whenever the success rate over the previous 20 trials reached 80%, up to a maximum duration of 400 ms. We imposed a refractory period of 20 trials on these extensions, ensuring that consecutive extensions occurred at least 20 trials apart. We defined the T of a session as the value of T at the end of that session, and the initial T of the next session was set to the previous session's T minus 100 ms. T was never shortened within a session.
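
As a rough illustration of this adaptive schedule, the following Python sketch implements the rule exactly as stated above. The function names and code organization are ours (the task itself was run in LabVIEW), so treat this as a reading aid rather than the released task code.

```python
# Hypothetical sketch of the adaptive required-duration rule described above.
T_MAX_MS = 400     # maximum required duration
STEP_MS = 50       # extension per update
WINDOW = 20        # success rate evaluated over the last 20 trials
REFRACTORY = 20    # minimum trials between consecutive extensions

def update_required_time(T_ms, outcomes, trials_since_extension):
    """Return (new_T_ms, new_trials_since_extension) after one trial.

    outcomes: list of booleans (True = success), most recent trial last.
    """
    trials_since_extension += 1
    if (len(outcomes) >= WINDOW
            and trials_since_extension >= REFRACTORY
            and sum(outcomes[-WINDOW:]) / WINDOW >= 0.8
            and T_ms < T_MAX_MS):
        return min(T_ms + STEP_MS, T_MAX_MS), 0
    return T_ms, trials_since_extension   # T is never shortened within a session

def initial_T_next_session(final_T_ms):
    """The next session starts at the previous session's final T minus 100 ms."""
    return max(final_T_ms - 100, 1)       # the first session starts at T = 1 ms
```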

On recording days 1, 7, and 15, resting-state recording was conducted after the mice had drunk sufficient water following the lever-pull task. Of note, for one animal, the mid-training resting-state session was performed on recording day 8 instead of day 7 (Table 1). The recording duration was 10 min. On day 0, the resting-state recording was performed after the pre-training.

After the 15 lever-pull task sessions, mice were allowed to drink water freely in their home cage. After 1-6 days, a sensory-mapping session was performed. The recording time was 15 min under anesthesia with a mixture of fentanyl (0.05 mg/kg; Daiichi Sankyo Co., Tokyo, Japan), midazolam (5.0 mg/kg; Sandoz, Tokyo, Japan), and medetomidine (0.5 mg/kg; Zenoaq, Fukushima, Japan). Mice were stimulated visually with a red LED (1 s duration, 0.5 duty cycle, 10 Hz), tactilely with vibration of the right whisker pad (~150 Hz; occasionally, with an air puff to the whisker pad), and auditorily with white noise (1 s, ~70 dB SPL) from the speaker. These stimuli were presented in a fixed order (visual, tactile, auditory), and the inter-stimulus interval was randomly set to 10-15 s in each trial. After the session, anesthesia was reversed with the antagonists flumazenil (0.5 mg/kg; Nippon Chemiphar, Tokyo, Japan), atipamezole (2.5 mg/kg; Zenoaq, Fukushima, Japan), and naloxone (1.2 mg/kg; Alfresa Pharma, Osaka, Japan).

At least 2 days before the start of training sessions, the water in the home cage was replaced with water containing 2% citric acid, which mildly suppressed the daily water intake of the mice. During the period of daily training, mice typically obtained the necessary water (>1 mL) in the training sessions, and they had unrestricted access to standard pellets in their home cages. When a training break lasted ≥2 days (e.g., over weekends), bottles of water containing 2% citric acid were placed in the home cage and removed on the morning of the next training day.

For acclimatization to the head-fixed condition in the task box, mice were pre-trained. In the pre-training, mice were head-fixed and received water rewards in a similar manner as in the lever-pull task, except that they obtained the reward by licking the lick spout rather than pulling the lever. Pre-training was conducted 3-8 times, and the number of pre-training sessions for each mouse is included in the metadata. We noticed that some mice gradually lost weight over time even when they received their daily water intake in the behavioral session. Therefore, to maintain the body weights of the mice at approximately >80% of their pre-restriction values, we provided additional water and high-calorie food (Calorie-Mate fruit flavor, Otsuka Pharmaceutical Co., Tokyo, Japan) when necessary after the behavioral session.

For wide-field one-photon calcium imaging, a wide-field tandem-lens macroscope (THT mesoscope, Brain Vision, Tokyo, Japan), equipped with an objective lens (PLAN APO 1×, #10450028, Leica Microsystems, Wetzlar, Germany) and an imaging lens (F2.0, focal length 135 mm, Samyang, Seoul, Republic of Korea), was used. Images were acquired with a CMOS camera (ORCA-Fusion BT, C14440-20UP, Hamamatsu Photonics, Shizuoka, Japan) and HCImage software (ver. 5.0.2.2, Hamamatsu Photonics). Single images of 588 × 588 pixels were captured at 60 Hz. We alternately illuminated two excitation LEDs with different wavelengths (405 nm and 470 nm; M405LP1 and M470L5, Thorlabs, NJ, USA) and used band-pass excitation filters (FBH405-10 for 405 nm and FBH470-10 for 470 nm, Thorlabs). These lights were combined with a dichroic mirror (DMLP425R, Thorlabs) and delivered to the macroscope through a liquid light guide (Ø5 mm core) and collimator (LLG5-6H and COP1-A, Thorlabs). The collimated light passed through a 3D-printed field stop (whose geometry was designed for the inner space of the head-plate), a condenser lens (plano-convex lens, f = 150 mm, LA1417, Thorlabs), and a dichroic beam splitter (FF484-FDi01, Semrock; IDEX Health & Science, NY, USA), and was then projected onto the sample. For calcium imaging, a 3D-printed light shield was placed on the head-plate to prevent the excitation light from directly illuminating the eyes of the mouse. The total power of the excitation lights (blue and violet) was set at ~10 mW, and no degradation of the fluorescence signal across imaging sessions was observed. The fluorescence emission from the sample was collected by the objective, passed through the band-pass emission filter (FF01-536/40, Semrock; IDEX Health & Science) and the imaging lens, and projected onto the CMOS camera. Two sequential images obtained with the two different excitation lights were analyzed as a pair (see below), so 30 Hz is considered the effective sampling rate. LED illumination and image acquisition timing signals were recorded using the same DAQ device (USB-6229 or PCIe-6321) used for the task. We imaged 108,000 frames (30 min) in each task session, 36,000 frames (10 min) in each resting-state session (recording days 1, 7, and 15), and 54,000 frames (15 min) in each sensory-mapping session (recording day 16).

Three machine vision cameras (acA1440-220um, Basler, Ahrensburg, Germany) recorded videos of the upper body, the right side of the face, and the right eye at 100 Hz. Recording and control of the machine vision cameras were conducted with the Pylon Viewer (ver. 7.2.1.25747, Basler, Ahrensburg, Germany). Pulses (5 V, 100 Hz, duty cycle of 0.5) from the DAQ device were used to synchronize frame acquisition across cameras and were recorded as an analog voltage. A band-pass filter (center 850 nm, 25 nm FWHM, Edmund Optics, NJ, USA) was placed in front of the imaging lens of each camera. The videos were saved as MP4 files. LED arrays (OSI3CA5111A, OptoSupply, Hong Kong; 64 LEDs at 850 nm) at the back of each camera were used as light sources. In addition, a 910 nm LED was lit in front of the mouse. The brightness of the 910 nm LED was set for each session so that the pupil would be moderately dilated during imaging, and it was not adjusted within a session.

Behavioral data processing (filtering, event detection, lick-rate calculation, and temporal down-sampling to synchronize with the calcium imaging data) and imaging data processing (spatial down-sampling, separation of the interleaved images from the two excitation wavelengths, and motion correction) were conducted with MATLAB (R2022b; MathWorks, MA, USA) as described in the following subsections. All other data processing was conducted in a Python environment (conda version 24.3.0; Python version 3.10.14) with appropriate software packages (NumPy version 1.24.3; SciPy version 1.14.0; scikit-learn version 1.5.1; Pandas version 2.2.2; h5py version 3.11.0; PyNWB version 2.8.1; OpenCV Python bindings, version 4.10.0). The source code for all processing workflows is deposited at https://github.com/BraiDyn-BC/bdbc-data-pipeline.

Each task event (onsets of the tone cue, reward, lever pulls, and sensory stimuli in the sensory-mapping experiment) was detected with an appropriate threshold. The motion sensor output was processed by applying a 10 Hz cut-off low-pass filter and then subtracting its time average. A 0-1 function marking the voltage rise time points of the licking sensor was convolved with a 0.5 s exponential kernel to yield the lick rate. Raw data sampled at 5 kHz were embedded into the 30 Hz imaging frames as follows: any sound cue, reward delivery, or sensory stimulus occurring between the end of one frame and the beginning of the next was assigned to the earlier frame during down-sampling, while the lever position, lick rate, and environmental sensor values were averaged over the interval from the onset of one frame to the onset of the next to generate the value assigned to the former frame. A sketch of these steps follows.
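
The following Python sketch illustrates the lick-rate convolution and the frame-interval averaging. Two assumptions are ours and not specified above: we treat the "0.5 s exponential kernel" as a causal exponential with a 0.5 s decay constant, and we normalize it so the output reads in events per second (the released pipeline performs these steps in MATLAB).

```python
import numpy as np

FS_DAQ = 5000   # NI-DAQ sampling rate (Hz)

def lick_rate(lick_onsets, tau_s=0.5):
    """Convolve a 0/1 array marking lick-sensor voltage rise times (5 kHz)
    with a causal exponential kernel to obtain a smooth lick rate.
    Kernel time constant and normalization are our assumptions."""
    t = np.arange(0, 5 * tau_s, 1 / FS_DAQ)
    kernel = np.exp(-t / tau_s)
    kernel /= kernel.sum() / FS_DAQ            # integral of kernel = 1 event
    return np.convolve(lick_onsets, kernel)[: len(lick_onsets)]

def frame_average(signal_5khz, frame_onsets):
    """Average a continuous 5 kHz signal between consecutive imaging-frame
    onsets, assigning each mean to the earlier frame, as described above."""
    return np.array([signal_5khz[a:b].mean()
                     for a, b in zip(frame_onsets[:-1], frame_onsets[1:])])
```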

Images were down-sampled to 288 × 288 pixels and divided into two image stacks according to the recorded LED pulse timings: one stack contained the images acquired with blue-light excitation (I_B), and the other contained the images acquired with violet-light excitation (I_V). The displacement of each frame of I_B was estimated with NoRMCorre-based rigid frame registration, using the time-averaged image of I_B as the reference. The calculated displacement vector was applied to both the I_B and I_V stacks, and these motion-corrected stacks were treated as the raw data in the datasets.
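
A minimal Python sketch of this step, under two stated substitutions: we assume a fixed frame parity for the interleave (the actual pipeline follows the recorded LED pulse timings), and we use scikit-image phase correlation in place of the NoRMCorre-based rigid registration, which is a MATLAB tool.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def split_interleaved(stack, blue_first=True):
    """Split a 60 Hz interleaved stack into 30 Hz blue (I_B) and violet (I_V)
    stacks.  Fixed frame parity is assumed here for simplicity."""
    a, b = stack[0::2], stack[1::2]
    return (a, b) if blue_first else (b, a)

def rigid_correct(I_B, I_V):
    """Estimate a per-frame rigid displacement of I_B against its time-average
    and apply the same shift to I_V.  Stacks are assumed to be float arrays;
    phase correlation stands in for NoRMCorre rigid registration."""
    template = I_B.mean(axis=0)
    for i in range(len(I_B)):
        disp, _err, _phase = phase_cross_correlation(template, I_B[i],
                                                     upsample_factor=10)
        I_B[i] = nd_shift(I_B[i], disp)
        I_V[i] = nd_shift(I_V[i], disp)   # same displacement vector for both
    return I_B, I_V
```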

To draw the borders of neocortical areas in our imaging data, we first prepared a template frame for each animal and aligned the Allen common coordinate framework (Allen CCF) with this template frame using an approach based on MesoNet (the ks-mesoscaler, ks-affine2d, ks-affine-aligner, and bdbc-atlas-registration libraries). The animal-by-animal template frames were computed by aligning the session-average frames with each other: for each session, we first computed the mean of the 470 nm excitation frames over time. We then selected one of these session-mean images as the animal-representative image and estimated affine transformation matrices for converting the other session-mean images to this representative image, based on keypoints detected with the Oriented FAST and Rotated BRIEF (ORB) descriptor of the Python OpenCV library. Using these affine matrices, the standard-deviation images over time of the individual sessions were warped (using the ks-affine2d library) and then averaged to obtain the animal-by-animal template frames. The landmark-inference DeepLabCut network from MesoNet was used to estimate the nine skull landmarks on the animal-by-animal template frames. To estimate the affine transformation from the Allen CCF to each template frame, the landmarks estimated with a likelihood above 0.85 were aligned with those defined on the Allen CCF (provided by the authors of MesoNet; https://github.com/bf777/MesoNet/tree/master/mesonet/atlases/atlas). Finally, based on the two affine matrices (i.e., atlas to animal template, and animal template to session mean), we computed the transformation from the Allen CCF to each session-mean image (using the ks-affine2d library). The region-of-interest (ROI) masks provided by the MesoNet package were transformed using this final affine matrix to generate the corresponding binary masks of neocortical areas, and the signal intensity was averaged within each ROI to yield one time series per area.
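
To make the keypoint-based alignment step concrete, here is a hedged OpenCV sketch of estimating one session-to-representative affine matrix from ORB matches. Parameter values and function names are illustrative; the released pipeline wraps this logic in the ks-affine2d and ks-affine-aligner libraries.

```python
import cv2
import numpy as np

def estimate_session_affine(session_mean, representative, n_features=1000):
    """Detect ORB keypoints in two 8-bit session-mean images, match
    descriptors, and fit a 2x3 affine matrix with RANSAC."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(session_mean, None)
    kp2, des2 = orb.detectAndCompute(representative, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return M   # maps session_mean coordinates to representative coordinates

# The atlas-to-session transform is the composition of two such affines
# (atlas -> animal template, animal template -> session mean) in 3x3
# homogeneous form; ROI masks can then be warped with cv2.warpAffine.
```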

To enhance the usability of our dataset, we provided ROI signals with hemodynamics correction using a method similar to that conventionally used by others. Fluorescence signals obtained with blue excitation light (F_B) mainly contained calcium-dependent signal changes, but with small contamination by hemodynamic fluctuations (the ratio of oxy- to deoxyhemoglobin and the total volume of blood vessels and arteries). We therefore used the fluorescence obtained with violet excitation light (F_V) as an instantaneous reference for calcium-independent fluorescence fluctuations, as this wavelength is near the isosbestic point of GCaMP. We thus corrected F_B based on F_V using the following procedure. First, ratiometric signals were calculated as ΔF_B/F_B = (F_B − F_B0) / F_B0 and ΔF_V/F_V = (F_V − F_V0) / F_V0, with F_B0 and F_V0 representing the medians of F_B and F_V over time, respectively. The two time series were then band-pass-filtered to obtain (ΔF_B/F_B)_filt and (ΔF_V/F_V)_filt, with high-frequency acquisition noise and low-frequency baseline drifts removed. For this, the "filtfilt" function of SciPy (version 1.14.0) was applied with a fifth-order 0.01-10 Hz Butterworth band-pass filter, a frequency range consistent with previous studies. Finally, linear regression was performed to predict (ΔF_B/F_B)_filt from (ΔF_V/F_V)_filt, which served as an estimate of the hemodynamic component: (ΔF_B/F_B)_est = A × (ΔF_V/F_V)_filt + b, with A and b representing the slope and bias terms, respectively. The hemodynamics-corrected calcium signal was computed as the residual of the regression, i.e., (ΔF/F)_corrected = (ΔF_B/F_B)_filt − (ΔF_B/F_B)_est.
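
A compact Python sketch of this correction for a single ROI time series, using the SciPy "filtfilt" function and Butterworth filter mentioned above. The least-squares fit via np.polyfit is our assumption for the "linear regression" step; the released pipeline may differ in such details.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_EFF = 30.0   # effective imaging frame rate (Hz)

def hemodynamics_correct(F_B, F_V):
    """Per-ROI correction: ratiometric dF/F against the temporal median,
    fifth-order 0.01-10 Hz Butterworth band-pass applied with filtfilt,
    then regression of the blue signal on the violet signal; the residual
    is the corrected calcium signal."""
    dff_B = (F_B - np.median(F_B)) / np.median(F_B)
    dff_V = (F_V - np.median(F_V)) / np.median(F_V)
    b, a = butter(5, [0.01, 10.0], btype="bandpass", fs=FS_EFF)
    dff_B_f = filtfilt(b, a, dff_B)
    dff_V_f = filtfilt(b, a, dff_V)
    A, bias = np.polyfit(dff_V_f, dff_B_f, 1)    # slope and bias terms
    return dff_B_f - (A * dff_V_f + bias)        # residual = corrected dF/F
```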

We used DeepLabCut version 2.3.10 to estimate the keypoints representing body-part positions in the behavioral videos (Fig. 2; Table 2). The strategy for extracting images from the videos for neural-network model preparation is described in Fig. S3, with the numbers of images used summarized in Table 3 (see also Fig. 6). To improve the generalization of some models, we additionally used videos from different experiments obtained in the same behavioral rig for the initial training iterations. For later training iterations, training and testing of the models were performed incrementally: we verified the performance of a model in tracking one set of videos before extracting frames from another set, and frames were not extracted from videos for which the tracking performance was considered satisfactory. Prior to each training iteration, an image augmentation step using the "imgaug" Python package (version 0.4.0) produced 10-20 augmented images from each annotated image used for training and testing. The coordinates were defined so that the x-axis increases to the right and the y-axis increases downward. DeepLabCut inherently produces a keypoint estimate even when the body part in question is absent from the frame; therefore, in the published data, we share not only the raw estimates from DeepLabCut but also their likelihoods, for potential filtering purposes. To estimate the size and position of the pupil, the set of circumferential points along the pupil boundary was fitted with an ellipse on a frame-by-frame basis (sketched below). The center position and the major-axis length of the fitted ellipse were defined as the position and diameter of the pupil, respectively.
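
A minimal sketch of the per-frame pupil fit with OpenCV. The likelihood threshold here is our illustrative choice (the published data ship raw estimates plus likelihoods precisely so that users can pick their own), and the function name is hypothetical.

```python
import cv2
import numpy as np

def fit_pupil(points_xy, likelihood, min_likelihood=0.2):
    """Fit an ellipse to the DeepLabCut-tracked pupil-boundary keypoints of
    one frame and return (center_x, center_y, diameter).
    cv2.fitEllipse requires at least 5 valid points."""
    pts = np.asarray(points_xy, dtype=np.float32)[likelihood >= min_likelihood]
    if len(pts) < 5:
        return np.nan, np.nan, np.nan
    (cx, cy), (ax1, ax2), _angle = cv2.fitEllipse(pts)
    return cx, cy, max(ax1, ax2)   # major-axis length = pupil diameter
```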

For the resampling of the positional time series from the videography frame rate (100 Hz) to the effective imaging frame rate (30 Hz), we first up-sampled the series to the NI-DAQ sampling rate (5 kHz) and then down-sampled them to 30 Hz. During up-sampling, we dropped the inferred keypoint positions whose likelihood was < 0.2 and plugged the remaining values into the durations of their corresponding video pulses in the 5 kHz NI-DAQ recording. Inter-pulse interpolation was then performed only when the two neighboring pulses contained valid values. For the imaging pulses, two neighboring LED pulses (one violet and one blue) were merged to obtain the 30 Hz pulses. The mean over the duration of each of these pulses was computed to obtain the series down-sampled to the imaging frame rate.
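
The following Python sketch mirrors this resampling chain under simplifying assumptions noted in the comments; array names and edge-case handling are ours, not the pipeline's.

```python
import numpy as np

def resample_keypoint(values_100hz, likelihood, video_pulse_idx, frame_onsets,
                      min_likelihood=0.2):
    """100 Hz -> 5 kHz -> 30 Hz resampling as described above.
    video_pulse_idx: 5 kHz sample index of each 100 Hz video pulse.
    frame_onsets: 5 kHz sample indices of the merged (blue + violet) 30 Hz
    frame pulses.  Simplification: we interpolate between any two neighboring
    valid samples, whereas the pipeline interpolates only when the two
    neighboring pulses both contain valid values."""
    trace = np.full(frame_onsets[-1], np.nan)
    valid = likelihood >= min_likelihood               # drop low-likelihood points
    trace[video_pulse_idx[valid]] = values_100hz[valid]
    good = np.flatnonzero(~np.isnan(trace))
    if len(good) >= 2:
        inner = np.arange(good[0], good[-1] + 1)
        trace[inner] = np.interp(inner, good, trace[good])
    # Mean over each 30 Hz frame interval yields the down-sampled series.
    return np.array([np.nanmean(trace[a:b])
                     for a, b in zip(frame_onsets[:-1], frame_onsets[1:])])
```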
