How Buddha became a Christian saint

As I mentioned in passing in my last entry, many, if not most, of the oldest stories about Christian martyrs and saints are nothing but legendary fabrications, something that scholars have known perfectly well since at least the time of the Enlightenment, when scientific criteria of historiographic research began to be employed by ecclesiastical historians themselves.

Fabrications started very soon in the history of Christianity: a mere glance at the so-called ‘apocryphal’ gospels (those stories of Christ and his disciples, probably written between the 2nd and the 4th centuries, that were excluded from the canonical Bible) gives the impression that the fathers of the Church decided not to include them among the ‘inspired’ books perhaps less because of the possible heresies they contained than because most of what they reported was so patently absurd (the Christ Child giving life to some mud bird figurines just for amusement, for example) that nobody would seriously believe it.

But it seems that the standards of incredulity started to fall once the Church became a dominant political force after the times referred to in my previous entry. From then on, it seems that each local Christian community felt the need to have its own saints and martyrs, and resorted to anything that could be adapted to pre-existing templates of martyrdom stories, in order to fill their chapels and festivities with appropriate figures to be revered. This does not mean that none of the ‘official’ saints from the times of the Roman Empire or soon afterwards existed: many of them may have been real (from some we even have their writings), though even for those who existed, the miraculous parts of their official stories are clearly nothing but dramatic fabrication.

These inventions often have a farcical quality, in which independent narrative elements mix in unpredictable and spectacular ways, as in the story of St Veronica: a female character by the name of Berenice (a classical Greek name meaning ‘bearer of victory’: bere nike), mentioned in passing in an apocryphal Passion gospel of the 4th or 5th century, is transformed by way of a false etymology into Vera Icon (‘the true image’, mixing, by the way, Latin and Greek roots in a single term, as in the modern word ‘television’), the supposed bearer of the linen with which Jesus wiped his face on the way to the cross, miraculously leaving his image on it… a cloth of which numerous churches claimed to possess the ‘true’ one during and after the Middle Ages.

Other cases consist of the transmutation of some pagan cult, as with the Celtic goddess Brigid, whom Christians transformed into St Brigid, one of the patron saints of Ireland, to whom the traditional festivities, places of worship, and legends of the ancient goddess were transferred tout court. Less amusing are cases like that of St Catherine of Alexandria, who seems to be a mere reversal of the historical figure of the pagan scientist-philosopher Hypatia, actually killed by a Christian mob, into a legendary studious girl supposedly assassinated by pagan hordes.

But the funniest story of all is that of the saints Barlaam and Josaphat. These two characters appeared in a religious romance probably composed in Syria around the 7th or 8th century, which quickly became popular in the territory of the Byzantine Empire, and later in Western Europe, where it was re-elaborated and copied again and again, even as late as the 17th century, when the two legendary monks became the protagonists of a play by the prominent Spanish writer Lope de Vega. The characters also show up briefly in Shakespeare’s Merchant of Venice. I personally discovered the story (which is actually far from a secret, though not a widely publicized one) while doing research for the entry about the triumph of Christianity, in Candida Moss’ book The Myth of Persecution, which I quote verbatim here:

"Josaphat was an Indian prince who was converted to Christianity by the hermit Barlaam [who had supposedly been himself a disciple of the apostle St Thomas]. Astrologers had predicted at his birth that he would rule over a great kingdom (…), a prediction that led his father to shut the boy away in seclusion. Despite his father’s best efforts to keep him from the world, Josaphat realized the horror of the human predicament through encounters with a leper, a blind man, and a dying man. His view of the world thrown into jeopardy, he then met Barlaam the hermit, converted, and spent the remainder of his life in quiet contemplation of the divine.

If this story sounds familiar, it should. It’s nothing but a Christianized version of the life of Siddhartha Gautama, the Indian prince who became the Buddha (…) It isn’t just the broad plot details that are similar; minute plot details and even phraseology are identical. Even the name Josaphat is just a corruption of the word Bodisat or Bodhisattva, a title for the Buddha meaning an enlightened person.”

In this case, the ‘fabrication’ of Josaphat (or Joasaph, in other manuscripts) was not deliberate, but the accidental result of human error. The story simply grew and spread, moving from one region to another, translated and retranslated into different languages, in countries with different cultures and religions, until a Christian monk took it for a real story about real Christian people. Barlaam and Josaphat were canonized in both the Orthodox and the Catholic churches, their feast being celebrated in the West on November 27, until the historical mistake was discovered and the feast removed from the canonical sanctorale not many decades ago. The name Josaphat became so popular in some regions that it later became the name of a real saint: the Polish 17th-century bishop St Josaphat Kuntsevych, who surely never knew that his name was, of all things, that of the founder of one of the most important world religions competing with Christianity.

References:

Moss, Candida (2013). The Myth of Persecution: How Early Christians Invented a Story of Martyrdom. New York, HarperCollins.

Catechol derivatives from “ideal lignin”

In the transition from a petrol-based society to one based on bio-renewable resources, the replacement of aromatic chemicals is one of the most challenging issues. Nowadays, around 40% of bulk chemicals belong to the aromatic category, and a myriad of products are derived from benzene, toluene and xylene. Aromatic molecules are present in plants, but in most cases at low concentrations, which makes their extraction and industrial use unviable. There are a few exceptions, such as cardanol, derived from cashew nut shell oil, or the tannins, and these are already used in industry. However, their abundance is not large enough to replace all petrol-derived aromatic compounds. There is, though, a large stock of aromatics in lignin, the polymer that confers support and toughness on plants. But precisely because of its role in nature (support and protection), lignin is recalcitrant: it is difficult to break this polymer into its individual constituents. That is why, currently, the paper industry uses it as fuel instead of converting it into chemicals.

Lignin consists mainly of two phenylpropanoid units: guaiacyl (G) and syringyl (S) (Figure 1), which are linked together as a result of radical coupling. Because of its random mechanism, the polymerization results in different types of linkages between the units and produces a complex and heterogeneous structure. The depolymerization of lignin is based on the cleavage of the β-O-4 linkage, which accounts for 45 to 85% of all interunit linkages. However, other linkages remain uncleaved and, moreover, the depolymerization conditions often trigger the repolymerization (condensation) of the cleaved fragments. As a consequence, the depolymerization of lignin yields just a few products in the best case, and dozens or even hundreds otherwise (Figure 1).

Figure 1. Mechanism for lignin condensation under acidic conditions. Credit: Li et al. (2018)

The heterogeneity of lignin is the biggest hurdle for its depolymerization. In order to circumvent this problem, researchers have bioengineered biomass to achieve more homogeneous lignins. For example, a 78% monomer yield was obtained from a modified lignin with 98% S units and around 90% of β-O-4 linkages. But what would be the outcome with a homogeneous lignin, an “ideal lignin”? The answer to that question has recently been published i, based on studies on the depolymerization of catechyl lignin (C-lignin), an unusual type of lignin found in the seed coats of vanilla. This lignin is a homopolymer of C units, which results in benzodioxane units in the polymer (Figure 2).

Figure 2. Benzodioxane structure, characteristic of C-lignin. Credit: Li et al. (2018)

This special feature of C-lignin, lacking eliminable benzylic hydroxyl groups, makes it resistant to acidic conditions. These are typically used to purify lignin from polysaccharides, but in common lignins, where eliminable benzylic hydroxyl groups are present, the acidic medium triggers repolymerization (Figure 1). The researchers found that the acidic treatment of this lignin led to no obvious change in its structure. Moreover, alkaline oxidative methods, commonly used to depolymerize standard lignin, were also ineffective against C-lignin. Again, the stability of the benzodioxane structure was the reason. The authors chose the hydrogenolysis method to depolymerize the lignin and compared the results with the cleavage of the model compound D1 (Figure 3, right). In both cases, two main products, catechylpropanol (M1) and catechylpropane (M2), together with a minor product M3, a cyclization product of M1, were observed by gas chromatography with flame ionization detection (GC-FID) (Figure 3).

Figure 3. GC-FID spectra of hydrogenolysis products from the model compound D1 and from C-lignin (CW) Credit: Li et al. (2018)

When they studied the effect of the hydrogenolysis conditions on the monomer yield, they were not surprised to see that the catalyst and solvent played a key role in the outcome. Methanol was the best solvent, with ethereal solvents (dioxane, THF) giving lower yields. Regarding the catalyst, Pd/C and Ru/C showed better product selectivity, whereas Pt/C displayed higher reactivity. Interestingly, palladium and ruthenium gave opposite selectivities for the main monomer: Pd/C produced the catechylpropanol monomer M1 with 89% selectivity, while the Ru/C catalyst yielded the catechylpropane monomer M2 with 74% selectivity (Figure 4).

Figure 4. Hydrogenolysis monomer yields from different catalyst and solvent combinations. C-Dimer refers to model compound D1. CW and LBL are C-lignins purified by different methods. Credit: Li et al. (2018)

The resulting product after lignin depolymerization was analyzed, and no residual polymer, side reactions or condensation products were detected. Based on these results, the researchers could conclude that all the C-lignin was depolymerized into the monomers by the hydrogenolysis treatment.

The monomers that can be obtained by the depolymerization of this type of lignin are highly interesting because they have the catechol core. Although compounds M1 and M2 are not currently used in bulk by the chemical industry, catechol is an important commodity chemical, serving as a precursor for pesticides and in the fine chemical industry (fragrances, flavors, drugs). For example, vanillin, the aroma of vanilla, is currently synthesized from catechol, which in turn is produced from phenol, which derives from benzene. If petrol is to be replaced by biobased resources, catechol derivatives could come from C-lignin. Unfortunately, C-lignin is a rather unusual lignin. The authors speculate that this could be circumvented if C-lignin could be produced in energy crops. However, this is a big question mark, as it is not known whether plantation trees such as pines or poplars could tolerate the genetic modifications needed to make them produce C-lignin. A more feasible approach could be to use candlenut shells, which are abundant as a by-product of the use of the nuts for biodiesel and have been reported to give catechols upon depolymerization ii.

References:

i Y. Li et al. Sci. Adv. 2018;4:eaau2968. doi: 10.1126/sciadv.aau2968

ii K. Barta et al. Green Chem. 2014, 16, 191–196. doi: 10.1039/C3GC41184B

MI weekly selection #306

Patient with Parkinson’s gets experimental stem cell-based treatment

Researchers have placed 2.4 million dopamine precursor cells derived from induced pluripotent stem cells into the brain of a patient with Parkinson’s disease, the first of seven people to undergo the experimental treatment. “The patient is doing well, and there have been no major adverse reactions so far,” said researcher Jun Takahashi of Kyoto University in Japan, where the precursor cells were developed.

Nature

Vaccine to hinder malaria transmission under development

A liposome-based malaria vaccine is being developed that could prevent transmission of the disease by mosquitoes after they bite an infected person who has been inoculated. “[W]e expect that when an uninfected mosquito bites a person infected with the malaria parasite, the blood it sucks up will carry the parasite and the human antibodies that will prevent the parasite from multiplying in the insect’s gut.”

The Conversation

Re-examination of satellite data shows hidden sub-Antarctic continents

A new examination of data collected by a defunct European Space Agency satellite has revealed lost continents underneath the ice in Antarctica. “In East Antarctica, we see an exciting mosaic of geological features that reveal fundamental similarities and differences between the crust beneath Antarctica and other continents it was joined to until 160 million years ago,” said Fausto Ferraccioli, co-author of the study.

Space.com

Radial velocity used to detect exoplanet orbiting nearby star

An exoplanet has been found orbiting a star six light-years away from the Solar System. The planet, dubbed Barnard’s Star b, is a super-Earth, and astronomers detected it using radial velocity, which tracks wobbles that stem from an orbiting planet’s gravitational pull.

BBC

Link between obesity genes, depression

Depression has been linked to a genetic predisposition for obesity. “This analysis was important in that it suggests a psychosocial effect of higher BMI as well as, or instead of, a physiological effect driven by adverse metabolic health,” the study says.

The Scientist

Nonequilibrium effects in hybrids of electron systems with spontaneously broken symmetries

Imagine a military regiment in formation. That, we will call symmetry. Now imagine the same regiment when it is dismissed by the commanding officer: at once the soldiers disperse and tend to form domains (groups) or pairs. Hence, we can say that the symmetry is spontaneously broken. Both superconductors and ferromagnets are examples of electron systems with spontaneously broken symmetries, and are thereby characterized by order parameters. In both cases the commanding officer is temperature.

Texas A&M ROTC Cadet Corps breaking symmetry at the “dismissed” command. Photo by Frank Scherschel / The LIFE Picture Collection / Getty Images.

Superconductors

At low temperatures, the resistivity of a metal (the inverse of its conductivity) is nearly constant. As the temperature of a material is lowered towards absolute zero, the resistivity should approach a constant value. Many metals, known as normal metals, behave in this way.

The behaviour of another class of metals and some other materials is quite different. These metals behave normally as the temperature is decreased, but at some critical temperature (which depends on the properties of the metal), the resistivity drops suddenly to zero. These materials are known as superconductors. The resistivity of a superconductor is not merely very small at temperatures below the critical temperature; it vanishes! Such materials can conduct electric currents even in the absence of an applied voltage, and the conduction occurs with no joule heating losses.

Conspicuously absent from the list of superconductors are the best metallic conductors (Cu, Ag, Au), which suggests that superconductivity is not caused by a good conductor getting better but instead must involve some fundamental change in the material. In fact, superconductivity results from a kind of paradox: ordinary materials can be good conductors if the electrons have a relatively weak interaction with the lattice, but superconductivity results from a strong interaction between the electrons and the lattice.

Consider an electron moving through the lattice. As it moves, it attracts the positive ions and disturbs the lattice, much as a boat moving through water creates a wake. These disturbances propagate as lattice vibrations, which can then interact with another electron. In effect, two electrons interact with one another through the intermediary of the lattice; the electrons move in correlated pairs (called Cooper pairs) that do not lose energy by interacting with the lattice. The order parameter for a conventional superconductor is then the amplitude of the Cooper pairing between electrons in states with opposite spins and momenta.

Ferromagnets

On the other hand, materials can be classified according to how they behave under an applied magnetic field. One of these categories is ferromagnetism. In ferromagnetic substances, within a certain temperature range, there are net atomic magnetic moments, which line up in such a way that magnetization persists after the removal of the applied field.

Below a certain temperature, called the Curie point (or Curie temperature), an increasing magnetic field applied to a ferromagnetic substance will cause increasing magnetization up to a high value, called the saturation magnetization. This is because ferromagnetic substances consist of small magnetized regions called domains.

The main defining features of ferromagnets are the broken spin-rotation symmetry along the direction of magnetization and the associated exchange energy h that splits the spin-up and spin-down spectra. This also leads to a strong spin dependence (spin polarization) of the observables related to ferromagnets.

Together

There are two mechanisms that prevent most ferromagnetic materials from becoming superconducting. One of them is the orbital effect, due to the intrinsic magnetic field in ferromagnets: when this field exceeds a certain critical value, superconductivity is suppressed. The second mechanism is the paramagnetic effect. This is due to the intrinsic exchange field of the ferromagnet, which shows up as a splitting of the energy levels of spin-up and spin-down electrons and hence prevents the formation of Cooper pairs.

Now, Sebastian Bergeret (CFM & DIPC) and others summarize 1 what we already know about the regime where this spin-splitting field is present, but not yet too large to kill superconductivity, in a paper published in Reviews of Modern Physics.

The researchers focus on transport and thermal properties of superconducting hybrid structures with a spin-split density of states. Such a splitting can be achieved either by an external magnetic field or, more interestingly, by placing a ferromagnetic insulator adjacent to a superconducting layer.
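As a rough numerical illustration of what such a spin-split density of states looks like, here is a minimal sketch based on the standard BCS expression with a Zeeman-like splitting h; the parameter values are illustrative and are not taken from the reviewed paper.

```python
import numpy as np

def spin_split_bcs_dos(E, delta=0.2e-3, h=0.05e-3, gamma=1e-6):
    """Normalized BCS density of states for spin-up and spin-down electrons
    in a superconductor with gap delta and exchange (Zeeman) splitting h.
    All energies in eV; gamma is a small broadening for numerical stability."""
    Ec = E + 1j * gamma
    dos_up = np.abs(np.real((Ec - h) / np.sqrt((Ec - h) ** 2 - delta ** 2)))
    dos_down = np.abs(np.real((Ec + h) / np.sqrt((Ec + h) ** 2 - delta ** 2)))
    return dos_up, dos_down

E = np.linspace(-1e-3, 1e-3, 2001)   # energies around the Fermi level
up, down = spin_split_bcs_dos(E)
total = 0.5 * (up + down)
# The coherence peaks of the two spin species are shifted by +/- h,
# so the total density of states shows four peaks instead of two.
```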

Several experimental situations are discussed using a theoretical framework based on the quasiclassical formalism, with which both the thermodynamic and the nonequilibrium properties of such hybrid structures can be accounted for. In order to cover effects beyond quasiclassics, such as strong spin polarization, the researchers combine the quasiclassical equations with effective boundary conditions.

Bergeret et al. show that the combination of superconductivity and magnetism requires, on the one hand, a description of additional nonequilibrium modes, spin and spin energy, and, on the other, their mutual coupling. This leads to novel and intriguing phenomena with a direct impact on the latest research activities and on proposed future technologies based on superconductors and spin-dependent fields.

Author: César Tomé López is a science writer and the editor of Mapping Ignorance.

References

  1. F. Sebastian Bergeret, Mikhail Silaev, Pauli Virtanen, and Tero T. Heikkilä (2018) Colloquium: Nonequilibrium effects in superconductors with a spin-splitting field Reviews of Modern Physics doi: 10.1103/RevModPhys.90.041001

The convergence of neuroscience and artificial intelligence

Several researchers in the field of artificial intelligence (AI) are warning about an AI winter, meaning that scientists might lose interest in the discipline, institutions might drastically reduce the funding devoted to its research, and the field might lose presence in the public debate. It wouldn’t be the first AI winter, though. The last two decades have been a period of almost unrivalled optimism about this subject: hardware, big datasets and deep learning have finally created artificial intelligence that wows consumers and funders alike. However, we are still a long way from obtaining a general AI or more human-like systems.

Personally, I wouldn’t be so sure about such a decline in AI research, but I’m convinced that from now on this area will get along much better with neuroscience and its rules. We could call an AI winter the period that will be needed to take another step forward towards a better understanding of how the human brain learns and extracts information from the environment, with the help of these two sciences. This article describes the researchers’ tendencies in this direction and some practical examples of the contribution of neuroscience.

The current situation is paradoxical, since in its origins AI was based on neuroscience and psychology. Due to the later development and expansion of each of those subjects, their limits became clearer and the interaction got lost.

Neuroscience provides two advantages for AI. First, neuroscience provides a rich source of inspiration for new types of algorithms, independent of mathematical ideas that have largely dominated traditional approaches to AI, such as deep learning and neural networks. Second, neuroscience can provide validation of AI techniques that already exist. If a known algorithm is subsequently found to be implemented in the brain, then that is strong support for its plausibility as an integral component of an overall general intelligence system.

However, it is at this point that two scientific trends differ. Researchers like Henry Markram, Dharmendra Modha and Stephen Larson focus on the complete simulation of the brain, even from a biological point of view. They estimate that a million lines of code are enough for this purpose. They are trying to replicate all the synapses, dendrites and axon firings so as to understand how the brain learns, how it gets information about the environment and even how to fight mental diseases. In this article, however, I’ll talk more about works that just try to reproduce the learning process from an engineering point of view.

Reinforcement learning

Neuroscience has brought reinforcement learning (RL) back into fashion. It was never a particularly trendy tool: the algorithm is computationally very inefficient, and it requires hundreds or thousands of experiments to reach the optimal value. But this experimentation with different states and decisions is becoming a great advantage for the modelling and representation of some crucial human abilities.

For instance, RL has become a good representation of how animals and humans learn motor skills. It pursues learning through the repetition of an action. Just try to remember how you learned to ride a bike, or to swim, or how you got to know by heart the way back home in a new city. It is a sum of trial-and-error decisions in which the person weighs the good decisions that let them keep their balance or reach their destination.
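As a minimal, generic illustration of this trial-and-error logic, here is a tabular Q-learning sketch; the toy “route home” task and the parameter values are invented for the example and are not a model taken from the neuroscience literature.

```python
import random

# A toy 1-D "route home": states 0..5, where state 5 is the destination.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                      # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):              # many repetitions of the same task
    s = 0
    while s != GOAL:
        # explore occasionally, otherwise exploit what has been learned so far
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            best = max(Q[(s, act)] for act in ACTIONS)
            a = random.choice([act for act in ACTIONS if Q[(s, act)] == best])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # temporal-difference update: decisions that lead home get reinforced
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next
```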

Moreover, a great achievement has been the combination of deep neural networks with RL, which brings in the use of episodic memories. RL in this sense represents the natural learning of skills, for instance the rules of a game. This information remains stored and is then used by a neural network system, which draws on the information gathered by RL to obtain the optimum solution, or makes sense of past experiences to understand a process of a certain complexity. This use of deep RL has proved valid for simulating how children gain experience and common sense by interacting with the environment.

Finally, RL is also turning out to point towards people’s capacity for imagining and planning. Humans can forecast long-term future outcomes through simulation-based planning, thanks to a model of the environment learned through experience.

Attention

Until quite recently, most neural network models (typically convolutional) worked directly on entire images or video frames, with equal priority given to all image pixels at the earliest stage of processing. However, this is not how the brain works: it focuses attention on moving objects, colours or specific parts of the scene. Therefore, this kind of image-recognition algorithm is now implementing attention, which also reduces its computational cost.
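A minimal sketch of the scaled dot-product attention mechanism that such models typically use is shown below; the array sizes are made up, and this is the generic textbook form rather than the code of any specific vision model.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends more strongly to the
    keys it matches best, instead of weighting every input location equally."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: weights sum to 1
    return weights @ V                                # weighted mixture of the values

# 4 query locations, 10 candidate image regions, 8-dimensional features
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(10, 8)), rng.normal(size=(10, 8))
out = attention(Q, K, V)   # shape (4, 8): each output focuses on a few regions
```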

Continual learning

One of the main characteristics of the human brain is the ability to learn continuously without forgetting previously acquired knowledge or skills. In the case of neural networks, until recently, every new piece of knowledge implied retraining the network, which was catastrophic for the previously learned associations, encoded in the weights and biases that constitute a neural network’s route to knowledge. Now, researchers are developing a form of elastic weight consolidation so that the same neural network system can learn different things without losing any information.
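A minimal sketch of the elastic weight consolidation idea: the loss on a new task is augmented with a quadratic penalty that anchors the parameters that mattered most for the old task. The numbers below are purely illustrative.

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_old, fisher, lam=1.0):
    """New-task loss plus a penalty that keeps parameters important for the
    old task (large Fisher information) close to their previously learned values."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return task_loss + penalty

theta_old = np.array([0.8, -1.2, 0.3])   # weights learned on the first task
fisher    = np.array([5.0, 0.1, 2.0])    # how important each weight was for it
theta     = np.array([0.9, 0.5, 0.2])    # candidate weights while learning a new task
print(ewc_loss(task_loss=0.4, theta=theta, theta_old=theta_old, fisher=fisher))
```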

Efficient learning

Humans have a great ability to learn rapidly about new concepts from only a handful of examples, which makes knowledge and learning very flexible. This is a very hard task for AI. However, recent learning models are creating neural networks that learn how to learn. It can be easily understood with the following example: a child has a natural ability to recognize different handwritten letters, even when they are written by different people in different styles. Neural networks are reproducing this ability by leveraging prior experience with related problems to support one-shot concept learning.
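One simple way to make this concrete is to reuse an embedding obtained from prior, related experience and classify a new concept from a single labelled example by nearest neighbour in that embedding space. The sketch below uses a stand-in embedding and made-up “letter” images; it is not one of the specific models discussed in the reference.

```python
import numpy as np

def embed(x):
    # Stand-in for a feature extractor pre-trained on many related problems;
    # here it is just a fixed random projection, for illustration only.
    rng = np.random.default_rng(42)
    W = rng.normal(size=(16, x.size))
    return W @ x.ravel()

def one_shot_classify(query, support):
    """Assign the query to the class of its single nearest labelled example."""
    q = embed(query)
    dists = {label: np.linalg.norm(q - embed(x)) for label, x in support.items()}
    return min(dists, key=dists.get)

# one example per class ("one shot"): two made-up 4x4 "letter" images
support = {"A": np.eye(4), "B": np.ones((4, 4))}
print(one_shot_classify(np.eye(4) + 0.1, support))   # expected: "A"
```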

This is also related to how humans transfer learning. Normally, a person who knows how to use a laptop or drive a car can generally use an unfamiliar computer or vehicle.

Conclusions

It is not only AI that will benefit from the feedback from neuroscience. In the opposite direction, AI and mainly machine-learning algorithms have transformed neuroscience forever, along with the tools used to analyze MRI scans, make diagnoses out of big data and develop new drugs.

In this new era, neither science will be able to evolve without the other 1.

References

  1. Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258 doi: 10.1016/j.neuron.2017.06.011

How to study the protein corona using fluorinated nanoparticles

Author: Mónica Carril is an Ikerbasque Research Associate at the Biophysics Institute CSIC-UPV/EHU.

When nanoparticles (NPs) come into contact with biological fluids such as blood, the proteins present in the fluid adsorb onto the surface of the NPs, forming what is known as the protein corona. This is a dynamic process in equilibrium with the surrounding proteins, and it may lead to drastic changes in the NPs. The protein corona masks the surface of NPs and alters their physicochemical properties, which is why it is a matter of concern in the field of nanomedicine. If a NP is designed for a biomedical application with a particular charge, ligand or targeting moiety, the presence of the corona may alter its size and surface properties, unintentionally modifying the fate and excretion pathways of the NPs in vivo, most likely shifting them away from the desired target. 1

Illustration of protein corona formation. Reproduced with permission from Elsevier (Ref 1)

For these reasons, the protein corona has been extensively studied by multiple techniques. However, in order to be analysed, NPs with their protein corona usually have to be isolated from the protein solution, which means the equilibrium situation is lost. Frequently, the protein corona is studied by measuring the size increase of the NPs in solution due to the layer of proteins attached onto them, and one way to do so is by measuring the diffusion of the NPs, which can be correlated with their size. Several optical methods have been used to measure the diffusion of NPs; however, optical methods in complex media suffer from light scattering and cannot be used in turbid environments. 1 Of course, being able to study protein corona formation in vivo and in real time, rather than trying to emulate it in the lab, would be a major breakthrough. The use of non-optical methods such as magnetic resonance spectroscopy (MRS) brings us closer to the actual in vivo and in situ evaluation of protein corona formation.

In our paper, 2 we describe diffusion measurements by fluorine-based nuclear magnetic resonance (19F NMR) spectroscopy as a non-optical method that obtains diffusion information from fluorinated species without interference from the background, thanks to the natural absence of fluorine in biological fluids. We designed and prepared different water-dispersible fluorinated NPs suitable for providing an adequate signal in 19F NMR. By exposing those fluorine-labelled NPs to mixtures of proteins, plasma, blood or cells, it was possible to measure their diffusion in equilibrium with the surrounding medium, without the need to isolate them.

In a typical experiment, the signal intensity decay due to the diffusion of the NPs is recorded in a 19F-based diffusion-ordered nuclear magnetic resonance spectroscopy (DOSY) experiment and subsequently fitted to a mono-exponential decay to obtain a value for the diffusion constant (D), which is used to calculate the hydrodynamic size (rh) of the NPs via the Einstein-Stokes relation. Initially, and as a proof of concept, artificial coronas were prepared by chemically attaching an increasing number of proteins onto the surface of selected fluorinated NPs. As the number of proteins increased, so did the size of the resulting NP complex, as obtained from the diffusion measurements by 19F NMR.
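As a rough numerical sketch of that last step, the snippet below fits a mono-exponential decay to invented DOSY-style intensities and converts the fitted diffusion constant into a hydrodynamic radius via the Einstein-Stokes relation; the decay data, b-values and conditions are illustrative only and do not come from the paper.

```python
import numpy as np

# invented DOSY-style data: signal intensity vs gradient-weighting factor b (s/m^2)
b = np.array([0.0, 0.5e10, 1.0e10, 1.5e10, 2.0e10, 2.5e10])
I = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22])

# mono-exponential decay I = I0 * exp(-b * D): a straight line in log space
slope, intercept = np.polyfit(b, np.log(I), 1)
D = -slope                                   # diffusion constant, m^2/s

# Einstein-Stokes relation: r_h = k_B T / (6 pi eta D)
kB, T, eta = 1.380649e-23, 310.15, 0.69e-3   # J/K, 37 C, viscosity of water (Pa s)
r_h = kB * T / (6 * np.pi * eta * D)
print(f"D ~ {D:.1e} m^2/s  ->  hydrodynamic radius r_h ~ {r_h * 1e9:.1f} nm")
```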

Next, we tested our methodology to evaluate the formation of non-covalent coronas, i.e. spontaneously formed coronas, by exposing one type of our fluorine-labelled NPs to increasing amounts of single plasma proteins, such as human serum albumin (HSA) or transferrin (TF). Proteins adsorbed onto the surface of our NPs, leading to a size (rh) increase detectable by our method.

Size increase due to protein corona in covalently linked and spontaneous corona formation processes.

Finally, measurements in more realistic complex media were performed. Thus, several 19F-labelled NPs were mixed with real samples of human blood and human plasma, and diffusion measurements were carried out at 37 ºC to mimic physiological conditions as much as possible. We noticed that, depending on the surface of the tested NPs, the response was very different in each medium. We observed a consistent size shrinkage for some NPs, while in other cases a size increase was detected. Interestingly, in all cases the data obtained after incubation with HSA (the most abundant protein in human plasma), with plasma, or with blood differed from one another for the same NP type. These data suggest that in vitro protein corona studies are only simulations that may give us a hint about the tendency of our NPs to adsorb proteins on their surface or not, but are unable to model the in vivo protein corona.

Size changes of different fluorinated NPs in the presence of HSA, plasma or blood.

Hence, it is important to advance the knowledge of the real protein corona, as it influences the fate of nanomaterials in vivo and their applications as potential therapeutic and diagnostic nanotools. The use of magnetic resonance as a diffusion-measuring technique opens up the possibility of measuring NP size in vivo using an MRI scanner. It must be noted that this method is exclusively based on measuring diffusion constants to obtain size, and is insensitive to information at the molecular level or about the proteins involved in the corona. Obviously, we envisage that the interpretation of changes in hydrodynamic radii in vivo will not be straightforward, given the complexity of a living being. Nonetheless, these measurements are a promising starting point for future monitoring of the geometry changes of NPs in vivo.

References

  1. Carolina Carrillo-Carrion, Mónica Carril, Wolfgang J. Parak. Techniques for the experimental investigation of the protein corona. Curr. Opin. Biotech. 2017, 46, 106. doi: 10.1016/j.copbio.2017.02.009.
  2. Mónica Carril, Daniel Padro, Pablo del Pino, Carolina Carrillo-Carrion, Marta Gallego, In situ detection of the protein corona in complex environments. Nat. Commun. 2017, 8, 1542. doi: 10.1038/s41467-017-01826-4.

MI weekly selection #305

Astronomers identify star with clues to early days of universe

A nearby star may have been around since shortly after the Big Bang and could help astronomers learn more about what the universe was like back then. The star, 2MASS J18082002-5104378 B, is thought to be approximately 13.5 billion years old.

Space.com

Bodies burn more calories later in the day

Calorie burn is about 10% greater in the late afternoon-early evening time period. Researchers sequestered volunteers in a windowless lab without access to phones or internet for over a month to measure their metabolic rate based on their internal clocks.

Live Science

Activity between hippocampus, amygdala may indicate worsening mood

Activity in a network that links the hippocampus and the amygdala may signal when someone’s mood is worsening. “When there’s a lot of activity in this network, mood is negative,” said study co-author Dr. Vikaas Sohal.

Scientific American

Scientists eye Weyl metals as way to study dynamos

Weyl metals, topological materials in which electrons behave in strange ways, may help researchers better understand dynamos, which produce Earth’s magnetic field. Scientists say they may be able to create a dynamo in the lab using Weyl metals.

Science News

Cannabis use disturbs adolescent brain maturation

Exposure to cannabis may affect how adolescent brains mature. Studies suggest that cannabis increases dopaminergic activity signals in the brain and reduces the quantity of inhibitory neurons in the prefrontal cortex.

The Scientist

Buckyball difluoride, a single-molecule crystal

Endohedral fullerenes, also called endofullerenes, are fullerenes that have additional atoms, ions, or clusters enclosed within their inner spheres. The first lanthanum C60 complex was synthesized in 1985 and called La@C60. The @ (at sign) in the name reflects the notion of a small molecule trapped inside a shell.

The chemistry of endohedral fullerenes is fascinating. Fullerenes interact with their encapsulated guest molecules either via noncovalent, dispersive and electrostatic forces or, because of the fullerene’s high electron affinity, they stabilize encapsulated cationic systems by forming covalent-type complexes between negatively charged cages and cationic endohedral guests (metals or metal clusters). Even some more exotic and unique bonding mechanisms between encapsulated species and fullerenes have been reported.

But all of the above assumes the buckyball (C60) is the electron-hungry partner and that the encapsulated molecule is an electron donor or, directly, a cation. But could it be the other way around? Could we conceive of a system where the fullerene is the electron donor?

In spite of extensive studies conducted on diverse families of endohedral fullerenes, no system with an endohedral anion and a positively charged fullerene has been detected or proposed. Until now.

3D map of the spin density of buckyball difluoride in its triplet ground state. Because of the negligible covalent interaction between the encapsulated F2 and the C60+ cage, which is reminiscent of perfect ionic crystals, this molecule can be named a single-molecule crystal.

A team of researchers, including DIPC’s Gernot Frenking, has studied 1 a series of halogen molecules encapsulated in C60, X2@C60 (X = F, Cl, Br, I). As a result, the researchers demonstrate that F2@C60 is an unprecedented molecular system with a negatively charged F2 inside a positively charged C60+, which could be a viable synthetic target through fullerene surgery.

An energy decomposition analysis, in conjunction with natural orbitals for chemical valence computations, was carried out by the team for the singlet and triplet states of F2@C60, using 2F2− and C60+ as interacting fragments. An additional calculation was performed for 3,1F2@C60 using neutral F2 and C60 as fragments. It turned out that the charged fragments 2F2− and C60+ provide a better description of the bonding situation in F2@C60 than the neutral fragments.

In other words, there is essentially no covalent bonding between 2F2− and C60+. We are before a unique bonding situation in F2@C60, in which an electron is transferred from the C60 cage (of all things, an electron-deficient system that normally accommodates up to 6 additional electrons to fill its shell) to F2, thus forming the F2−@C60+ system. This charge separation between C60 and F2 takes place without covalent bond formation, a situation otherwise only found in perfect ionic crystals. Thus, F2@C60 could be termed a single-molecule crystal compound.

As a side effect, these results may shed new light on the oxidation of organic compounds by fluorine. It is generally believed that the fluorination reaction starts with the homolytic cleavage of the F2 bond. In F2@C60, however, F2 efficiently abstracts one electron from the electron-deficient C60. This could also be the case for less electron-deficient compounds in the initiation step of the fluorination reaction.

Author: César Tomé López is a science writer and the editor of Mapping Ignorance.

References

  1. Cina Foroutan-Nejad, Michal Straka, Israel Fernández, and Gernot Frenking (2018) Buckyball Difluoride F2@C60+ – A Single-Molecule Crystal Angewandte Chemie International Edition doi: 10.1002/anie.201809699

Open-sea experiments on a spar floating support for offshore wind turbines

Floating structures are becoming increasingly popular. They are used by several industries: some have been in business for decades, like those dedicated to oil and gas, some are newer, like renewables (wind, wave, tidal), and there are other uses still, like ports, while there is a constant flow of new ideas that imply their use in the near future. In the case of the offshore wind industry, moving offshore wind energy production towards deep waters has several advantages, including the availability of larger areas, stronger and steadier winds, and the reduction of visual and acoustic impact, but it also brings new challenges. The implementation of such concepts requires a significant amount of research into the development of reliable dynamic models able to represent the coupled behaviour of floating wind turbines. While such models are usually numerical codes, experimental activities play a crucial role in their validation.

How do you get experimental data for a huge offshore wind turbine? Do you actually build one? Broadly speaking, there are two kinds of experiments, namely small-scale and large-scale ones, and, in some cases, yes, you build a full-scale wind turbine.

Traditional small-scale activities (1:50–1:100) are carried out in a controlled environment such as wave tanks and ocean basins, where the desired wind-wave conditions can be reproduced, to measure the dynamic response of the structure and to calibrate the numerical model. The good news is that this kind of setting, a controlled environment, yields very precise and reliable data, but at the cost of being relatively expensive (the high rental fees of the basins are a main cost, and a key factor for the duration of the experiments) and still limited in representing all the relevant physical phenomena at model scale, which may significantly alter the dynamic behaviour of the model with respect to the full-scale structure.

On the other hand, large-scale activities (1:1–1:10) are carried out in the open sea and make it possible to represent all the relevant features of offshore wind turbines, including turbine-support interaction, the mooring system and grid connection, in relevant operational conditions. Clearly, such projects are even more expensive and usually represent pilot activities, which are carried out by big companies and/or public bodies for demonstration and commercial purposes, and whose results are rarely publicly available.

Up to now, several small-scale and large-scale experimental activities have been conducted on spar support structures for offshore wind turbines, aimed at proving the feasibility of the concept and validating the corresponding numerical models. But not all the data are freely available. For example, a full-scale prototype of a 2.3 MW spar floating offshore wind turbine was installed in 2009 by Statoil, off the coast of Norway, in a water depth of about 200 m. The project, called “Hywind Demo”, proved the technical feasibility of the spar configuration for floating offshore wind turbines, but both the detailed design characteristics of the offshore wind turbine and the recorded field data are the property of the company and confidential.

In 2006, a 1:47 scale model of a 5 MW spar floating wind turbine was tested at the Ocean Basin Laboratory of Marintek, in Trondheim (Norway). The model was tested under irregular waves and turbulent wind, and various control strategies were adopted. The experimental data showed relatively good agreement with the numerical results obtained with well-known models but, again, some key information concerning the detailed characteristics of the exact model was not released.

In 2009 the US National Renewable Energy Laboratory (NREL) developed the specifications of a representative utility-scale multimegawatt turbine, now known as the “NREL offshore 5-MW baseline wind turbine”, to support concept studies aimed at assessing offshore wind technology. This wind turbine is a conventional three-bladed, upwind, variable-speed, variable blade-pitch-to-feather-controlled turbine. The following year, the Offshore Code Comparison (OC3) project was established to verify the accuracy and correctness of the most commonly used numerical codes for the coupled analysis of offshore wind turbines. Within this project, the OC3-Hywind spar buoy was defined as the reference spar concept designed to support the NREL 5-MW reference offshore wind turbine. Since then, this concept has been widely used for experimental studies on offshore wind turbines, since Statoil’s Hywind characteristics are not released for public use.

Now, a team of researchers that includes Vincenzo Nava (BCAM & Tecnalia) has made public 1 the results of an open-sea, intermediate-size (1:30 scale) experiment on a spar floating support for offshore wind turbines, carried out off the Natural Ocean Engineering Laboratory (NOEL). These experiments were aimed at assessing some of the problems inherent to traditional experimental activities in ocean basins, namely by testing the feasibility of low-cost, intermediate-scale, open-sea activities on offshore structures, which are proposed to substitute or complement the traditional indoor activities.

The test site is located on the sea front of Reggio Calabria (Italy), on the eastern coast of the Strait of Messina. The site is particularly favourable for the selected case study, since it presents small wind-generated sea states, with significant wave heights and peak periods covering a variety of wave loading conditions over a relatively large frequency range. The support model tested is inspired by OC3-Hywind and is represented in parked-rotor conditions.

Spar hull model before (left) and after (right) installation, sustaining the parked turbine model

The experiments carried out during the campaign were aimed at addressing and solving some of the problems inherent to traditional experimental activities in ocean basins. Recognizing that the well-known identification techniques adopted in indoor laboratories must be modified to work in a non-controlled marine environment, the researchers offer a wide overview of the requirements, test methodologies, instrumentation and identification methods necessary for the dynamic identification of intermediate-scale models of offshore floating structures in open-sea conditions.

Numerical model of the spar structure in ANSYS AQWA

The results obtained from the free decay tests and irregular wave tests performed on the 1:30 spar structure are presented in terms of response amplitude operators (RAOs), damping coefficients and significant motions in heave, roll and pitch, in order to calibrate a numerical model of the structure implemented in the software ANSYS AQWA. These experimental data represent valuable and original information on the hydrodynamic behaviour of the OC3-Hywind platform, since the numerical model has been implemented at 1:1 scale, which allows an immediate comparison with the OC3-Hywind full-scale structure.
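As an illustration of how a response amplitude operator can be estimated from irregular-wave records, the sketch below uses the common spectral-ratio approach on synthetic signals; the actual identification procedure used by the authors may differ, and all the numbers are invented.

```python
import numpy as np
from scipy.signal import welch

fs = 10.0                            # sampling frequency, Hz
t = np.arange(0, 600, 1 / fs)        # 10-minute synthetic record
rng = np.random.default_rng(1)

# synthetic stand-ins for the measured wave elevation and the heave response
wave = sum(a * np.sin(2 * np.pi * f * t + p) for a, f, p in
           zip(rng.uniform(0.02, 0.1, 30), rng.uniform(0.05, 0.5, 30),
               rng.uniform(0, 2 * np.pi, 30)))
heave = 0.8 * wave + 0.01 * rng.normal(size=t.size)   # toy "structure" response

# power spectral densities and the spectral-ratio estimate of the RAO
f, S_wave = welch(wave, fs=fs, nperseg=2048)
_, S_heave = welch(heave, fs=fs, nperseg=2048)
mask = S_wave > 1e-8                 # keep only frequencies with real wave energy
rao = np.sqrt(S_heave[mask] / S_wave[mask])   # ~0.8 over the energetic band
```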

Even though the experimental RAOs obtained match the numerical predictions well, and allow the numerical model to be calibrated, the limited frequency range of the natural sea states recorded at the test site made it impossible to identify the dynamics of the model over the complete frequency range. Due to this limitation, the experimental RAO estimation and numerical model calibration could be performed only for the heave, roll and pitch degrees of freedom of the model.

All in all, the results show that the new approach may overcome some limitations of traditional small-scale activities, namely high costs and small scale, and enhances the fidelity of the experimental data currently available in the literature for spar floating supports for offshore wind turbines.

Author: César Tomé López is a science writer and the editor of Mapping Ignorance.

References

  1. Carlo Ruzzo, Vincenzo Fiamma, Maurizio Collu, Giuseppe Failla, Vincenzo Nava, Felice Arena (2018) On intermediate-scale open-sea experiments on floating offshore structures: feasibility and application on a spar support for offshore wind turbines Marine Structures doi: 10.1016/j.marstruc.2018.06.002

Unlocking graphene’s spintronic potential through spin-valley coupling

Author: José H. García is a postdoctoral researcher at the Catalan Institute of Nanoscience and Nanotechnology (ICN2).

Few materials have drawn as much attention as graphene. Its fascinating attributes, such as one-atom thickness and relativistic electrons, and its technological properties, such as transparency, large mechanical strength and ultra-high electron mobility, position it as one of the most promising materials of the present. Recently, simultaneous experimental and theoretical studies have confirmed that combining graphene with a family of materials known as transition metal dichalcogenides could enable the use of graphene in the rising field of spintronics.

Spintronics is a branch of electronics that aims to use an intrinsically quantum property of the electron known as spin as a replacement for charge in low-power and faster digital devices. Among the many proposals for controlling the spin, the use of a relativistic effect known as spin-orbit coupling is probably one of the most interesting, because it allows the spin to be manipulated electrically by coupling it to the momentum, which can be controlled by the application of a voltage.

To date, graphene’s low spin-orbit coupling positions it as the ideal spin wire, because electrons can rapidly propagate through it without altering their spin state. However, having a graphene-only spintronic device would simplify the manufacturing process while importing graphene’s exceptional properties. Different experiments performed in 2016 showed, through indirect measurements, that combining graphene with transition metal dichalcogenides could enhance its spin-orbit interaction by proximity hybridization, a purely quantum process in which the wave functions of the two materials interact, making each material absorb part of the other’s properties. That same year, Prof. Luis Hueso’s group at CIC nanoGUNE performed the first experiment where the individual properties of each material were used for spin manipulation: by using an electric gate to tune the flow of spins between graphene and the TMDC, they constructed a spin switch, a device that allows the spin current to be turned on and off.

Last year, a joint effort between the groups of Prof. Stephan Roche and Prof. Sergio Valenzuela, both at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), showed the first direct evidence of spin manipulation in graphene due to proximity-induced spin-orbit coupling. Roche’s group predicted that, due to the special characteristics of the proximity-induced spin-orbit coupling in these heterostructures, electrons will relax faster when their spin lies in the graphene plane than when it points out of the plane, a phenomenon known as spin-lifetime anisotropy 2. Measuring this would then be a clear signature of the spin-orbit coupling in the system. Later that year, Valenzuela’s group performed the first experimental confirmation of the effect at room temperature, a result that has led to a lot of research focused on these materials 1. In Figure 1 we show a schematic of the lateral device used in the experiment, and a cartoon of the spin-lifetime anisotropy.

Figure 1. Schematic of the lateral device used to measure the spin-lifetime anisotropy. When the spins are injected in the plane, they relax faster than for the out-of-plane injection direction and therefore quickly disappear after moving through the TMDC.

Why is this spin-orbit special?

The special thing about graphene/TMD heterostructures is that, due to their structural properties, they also possess an additional degree of freedom beyond the spin, known as the valley, and both are coupled by the spin-orbit interaction. In addition, when a metal is subjected to an external electric field, the electrons within it will move in the direction of the electric force, and their dynamics can be described by their dispersion relation, which is the dependence between their energy and their momentum p = ℏk. Due to the spin-orbit coupling, a change of momentum will also produce a variation of the electrons’ spins, which for these systems will also depend on the valley dynamics.

In Figure 2 we show the band structure and spin textures for a single valley in two particular situations: graphene on traditional substrates and graphene on WS2. For traditional substrates, possessing the typical Rashba spin-orbit coupling, the spin dynamics is the same in both valleys and therefore intervalley processes are not important. For graphene/TMDs, the spin-valley coupling imposes the constraint that the spin texture must have the opposite out-of-plane component when changing valleys, as shown in the inset of the figure. In fact, the arrows in this inset denote the direction of the spin-orbit field for a particular energy, which defines a preferred direction for the spins. Therefore, a change of valley inverts the preferential out-of-plane direction, leading to a change in the spin dynamics. This process generally occurs through intervalley scattering induced by disorder in the sample which, being stochastic, randomizes the out-of-plane spin-orbit field experienced by the electrons and ultimately increases the in-plane relaxation rate. This process is the source of the peculiar relaxation anisotropy.
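A back-of-the-envelope sketch of the resulting spin-lifetime anisotropy is given below, assuming the approximate relation ζ = τ⊥/τ∥ ≈ (λVZ/λR)²(τiv/τp) + 1/2 derived in the theoretical paper by Cummings et al. (reference 2), in terms of the valley-Zeeman and Rashba couplings and the intervalley and momentum scattering times; the parameter values are purely illustrative and the original paper should be consulted for the exact expression.

```python
# Minimal sketch of the spin-lifetime anisotropy in graphene/TMD heterostructures,
# assuming the approximate relation
#   zeta = tau_perp / tau_para ~ (lambda_VZ / lambda_R)**2 * (tau_iv / tau_p) + 1/2
# (valley-Zeeman vs Rashba couplings, intervalley vs momentum scattering times).
# Parameter values below are illustrative, not taken from the experiments.

def anisotropy(lambda_vz_meV, lambda_r_meV, tau_iv_ps, tau_p_ps):
    return (lambda_vz_meV / lambda_r_meV) ** 2 * (tau_iv_ps / tau_p_ps) + 0.5

# comparable couplings but slow intervalley scattering -> very large anisotropy
print(anisotropy(lambda_vz_meV=0.2, lambda_r_meV=0.15, tau_iv_ps=5.0, tau_p_ps=0.05))
```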

The TMDs, which possess a strong spin-valley coupling, are the source of this behaviour and imprint it on the high-mobility graphene via the proximity-induced spin-orbit coupling, endowing it with a larger capacity for information encoding. These kinds of symbiotic relations are currently fuelling the hype around van der Waals heterostructures as a groundbreaking platform for future spintronics devices.

Figure 2. Band structure of graphene on (a) a traditional Rashba-type substrate and on (b) a transition metal dichalcogenide. The corresponding spin textures, (c) and (d) respectively, are also shown.

The broad number of experiments performed during these last two years was recently compiled into a comprehensive review written by Prof. Roche’s group, in which all the experimental results were unified within a single theoretical framework and new experiments for measuring additional spin phenomena were also proposed 3. We hope that this work can help pave the way for unlocking graphene’s spintronic potential in the following years.

References

  1. L. Antonio Benítez et al. Strongly anisotropic spin relaxation in graphene–transition metal dichalcogenide heterostructures at room temperature, Nature Physics (2017). DOI: 10.1038/s41567-017-0019-2
  2. Aron W. Cummings, José H. Garcia, J. Fabian and Stephan Roche, Giant Spin Lifetime Anisotropy in Graphene Induced by Proximity Effects, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.119.206601
  3. Jose H. Garcia, Marc Vila, Aron W. Cummings, and Stephan Roche. Spin transport in graphene/transition metal dichalcogenide heterostructures, Chemical Society Reviews (2018). DOI: 10.1039/C7CS00864C
