The Science Thread

Science: what say you

  • It's a tool of the devil

  • If you can't understand it, don't vote

  • fiction is the only good kind

  • it's the most important human endeavor

  • I don't have to believe in what I can't understand



I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
get ready for a weird read

https://aeon.co/essays/cosmopsychism-explains-why-the-universe-is-fine-tuned-for-life

Is the Universe a conscious mind?
Cosmopsychism might seem crazy, but it provides a robust explanatory model for how the Universe became fine-tuned for life

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

Here are a few examples of this fine-tuning for life:

  • The strong nuclear force (the force that binds together the elements in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity there can be no life.
  • The physical possibility of chemical complexity is also dependent on the masses of the basic components of matter: electrons and quarks. If the mass of a down quark had been greater by a factor of 3, the Universe would have contained only hydrogen. If the mass of an electron had been greater by a factor of 2.5, the Universe would have contained only neutrons: no atoms at all, and certainly no chemical reactions.
  • Gravity seems a momentous force but it is actually much weaker than the other forces that affect atoms, by a factor of about 10^36. If gravity had been only slightly stronger, stars would have formed from smaller amounts of material, and consequently would have been smaller, with much shorter lives. A typical sun would have lasted around 10,000 years rather than 10 billion years, not allowing enough time for the evolutionary processes that produce complex life. Conversely, if gravity had been only slightly weaker, stars would have been much colder and hence would not have exploded into supernovae. This also would have rendered life impossible, as supernovae are the main source of many of the heavy elements that form the ingredients of life.
Some take the fine-tuning to be simply a basic fact about our Universe: fortunate perhaps, but not something requiring explanation. But like many scientists and philosophers, I find this implausible. In The Life of the Cosmos (1999), the physicist Lee Smolin has estimated that, taking into account all of the fine-tuning examples considered, the chance of life existing in the Universe is 1 in 10^229, from which he concludes:

In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.
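To see how odds like that compound, here is a toy sketch in Python. Every "viable window" width below is an invented placeholder rather than a measured value; the point is only that independently narrow windows multiply into astronomically small combined odds.

```python
# Toy illustration only: the window widths are made-up placeholders,
# not physical measurements. Independent narrow windows multiply
# into astronomically small combined odds.

windows = {
    "strong nuclear force": 1e-3,      # assumed viable fraction of its range
    "electron mass":        1e-4,      # assumed
    "strength of gravity":  1e-6,      # assumed
}

odds = 1.0
for constant, fraction in windows.items():
    odds *= fraction

print(f"Chance of every constant landing in its window by luck: 1 in {1/odds:.0e}")
# With dozens of constants, each with a narrow window, the combined odds
# head toward figures like Smolin's 1 in 10^229.
```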
The two standard explanations of the fine-tuning are theism and the multiverse hypothesis. Theists postulate an all-powerful and perfectly good supernatural creator of the Universe, and then explain the fine-tuning in terms of the good intentions of this creator. Life is something of great objective value; God in Her goodness wanted to bring about this great value, and hence created laws with constants compatible with its physical possibility. The multiverse hypothesis postulates an enormous, perhaps infinite, number of physical universes other than our own, in which many different values of the constants are realised. Given a sufficient number of universes realising a sufficient range of the constants, it is not so improbable that there will be at least one universe with fine-tuned laws.

Both of these theories are able to explain the fine-tuning. The problem is that, on the face of it, they also make false predictions. For the theist, the false prediction arises from the problem of evil. If one were told that a given universe was created by an all-loving, all-knowing and all-powerful being, one would not expect that universe to contain enormous amounts of gratuitous suffering. One might not be surprised to find it contained intelligent life, but one would be surprised to learn that life had come about through the gruesome process of natural selection. Why would a loving God who could do absolutely anything choose to create life that way? Prima facie theism predicts a universe that is much better than our own and, because of this, the flaws of our Universe count strongly against the existence of God.

Turning to the multiverse hypothesis, the false prediction arises from the so-called Boltzmann brain problem, named after the 19th-century Austrian physicist Ludwig Boltzmann who first formulated the paradox of the observed universe. Assuming there is a multiverse, you would expect our Universe to be a fairly typical member of the universe ensemble, or at least a fairly typical member of the universes containing observers (since we couldn’t find ourselves in a universe in which observers are impossible). However, in The Road to Reality (2004), the physicist and mathematician Roger Penrose has calculated that in the kind of multiverse most favoured by contemporary physicists – based on inflationary cosmology and string theory – for every observer who observes a smooth, orderly universe as big as ours, there are 10 to the power of 10^123 who observe a smooth, orderly universe that is just 10 times smaller. And by far the most common kind of observer would be a ‘Boltzmann’s brain’: a functioning brain that has by sheer fluke emerged from a disordered universe for a brief period of time. If Penrose is right, then the odds of an observer in the multiverse theory finding itself in a large, ordered universe are astronomically small. And hence the fact that we are ourselves such observers is powerful evidence against the multiverse theory.

Neither of these are knock-down arguments. Theists can try to come up with reasons why God would allow the suffering we find in the Universe, and multiverse theorists can try to fine-tune their theory such that our Universe is less unlikely. However, both of these moves feel ad hoc, fiddling to try to save the theory rather than accepting that, on its most natural interpretation, the theory is falsified. I think we can do better.


In the public mind, physics is on its way to giving us a complete account of the nature of space, time and matter. We are not there yet of course; for one thing, our best theory of the very big – general relativity – is inconsistent with our best theory of the very small – quantum mechanics. But it is standardly assumed that one day these challenges will be overcome and physicists will proudly present an eager public with the Grand Unified Theory of everything: a complete story of the fundamental nature of the Universe.

In fact, for all its virtues, physics tells us precisely nothing about the nature of the physical Universe. Consider Isaac Newton’s theory of universal gravitation:

F = G m1 m2 / r²

The variables m1 and m2 stand for the masses of two objects that we want to work out the gravitational attraction between; F is the gravitational attraction between those two masses; G is the gravitational constant (a number we know from observation); and r is the distance between m1 and m2. Notice that this equation doesn’t provide us with definitions of what ‘mass’, ‘force’ and ‘distance’ are. And this is not something peculiar to Newton’s law. The subject matter of physics is the basic properties of the physical world: mass, charge, spin, distance, force. But the equations of physics do not explain what these properties are. They simply name them in order to assert equations between them.
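To make the "prediction without definition" point concrete, here is a minimal Python sketch using standard published values for the Earth and Moon: we plug measured numbers in and get a force out, and at no point does the equation say what mass or force intrinsically are.

```python
# Newton's law of universal gravitation used purely predictively.
# The constants below are standard textbook values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m1 = 5.972e24        # mass of the Earth, kg
m2 = 7.348e22        # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m1 * m2 / r**2
print(f"Predicted Earth-Moon attraction: {F:.2e} N")   # roughly 2e20 newtons
```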

If physics is not telling us the nature of physical properties, what is it telling us? The truth is that physics is a tool for prediction. Even if we don’t know what ‘mass’ and ‘force’ really are, we are able to recognise them in the world. They show up as readings on our instruments, or otherwise impact on our senses. And by using the equations of physics, such as Newton’s law of gravity, we can predict what’s going to happen with great precision. It is this predictive capacity that has enabled us to manipulate the natural world in extraordinary ways, leading to the technological revolution that has transformed our planet. We are now living through a period of history in which people are so blown away by the success of physical science, so moved by the wonders of technology, that they feel strongly inclined to think that the mathematical models of physics capture the whole of reality. But this is simply not the job of physics. Physics is in the business of predicting the behaviour of matter, not revealing its intrinsic nature.

It’s silly to say that atoms are entirely removed from mentality, then wonder where mentality comes from

Given that physics tells us nothing of the nature of physical reality, is there anything we do know? Are there any clues as to what is going on ‘under the bonnet’ of the engine of the Universe? The English astronomer Arthur Eddington was the first scientist to confirm general relativity, and also to formulate the Boltzmann brain problem discussed above (albeit in a different context). Reflecting on the limitations of physics in The Nature of the Physical World (1928), Eddington argued that the only thing we really know about the nature of matter is that some of it has consciousness; we know this because we are directly aware of the consciousness of our own brains:

We are acquainted with an external world because its fibres run into our own consciousness; it is only our own ends of the fibres that we actually know; from those ends, we more or less successfully reconstruct the rest, as a palaeontologist reconstructs an extinct monster from its footprint.
We have no direct access to the nature of matter outside of brains. But the most reasonable speculation, according to Eddington, is that the nature of matter outside of brains is continuous with the nature of matter inside of brains. Given that we have no direct insight into the nature of atoms, it is rather ‘silly’, argued Eddington, to declare that atoms have a nature entirely removed from mentality, and then to wonder where mentality comes from. In my book Consciousness and Fundamental Reality (2017), I developed these considerations into an extensive argument for panpsychism: the view that all matter has a consciousness-involving nature.

There are two ways of developing the basic panpsychist position. One is micropsychism, the view that the smallest parts of the physical world have consciousness. Micropsychism is not to be equated with the absurd view that quarks have emotions or that electrons feel existential angst. In human beings, consciousness is a sophisticated thing, involving subtle and complex emotions, thoughts and sensory experiences. But there seems nothing incoherent with the idea that consciousness might exist in some extremely basic forms. We have good reason to think that the conscious experience of a horse is much less complex than that of a human being, and the experiences of a chicken less complex than those of a horse. As organisms become simpler, perhaps at some point the light of consciousness suddenly switches off, with simpler organisms having no experience at all. But it is also possible that the light of consciousness never switches off entirely, but rather fades as organic complexity reduces, through flies, insects, plants, amoeba and bacteria. For the micropsychist, this fading-while-never-turning-off continuum further extends into inorganic matter, with fundamental physical entities – perhaps electrons and quarks – possessing extremely rudimentary forms of consciousness, to reflect their extremely simple nature.

However, a number of scientists and philosophers of science have recently argued that this kind of ‘bottom-up’ picture of the Universe is outdated, and that contemporary physics suggests that in fact we live in a ‘top-down’ – or ‘holist’ – Universe, in which complex wholes are more fundamental than their parts. According to holism, the table in front of you does not derive its existence from the sub-atomic particles that compose it; rather, those sub-atomic particles derive their existence from the table. Ultimately, everything that exists derives its existence from the ultimate complex system: the Universe as a whole.

Holism has a somewhat mystical association, in its commitment to a single unified whole being the ultimate reality. But there are strong scientific arguments in its favour. The American philosopher Jonathan Schaffer argues that the phenomenon of quantum entanglement is good evidence for holism. Entangled particles behave as a whole, even if they are separated by such large distances that it is impossible for any kind of signal to travel between them. According to Schaffer, we can make sense of this only if, in general, we are in a Universe in which complex systems are more fundamental than their parts.

If we combine holism with panpsychism, we get cosmopsychism: the view that the Universe is conscious, and that the consciousness of humans and animals is derived not from the consciousness of fundamental particles, but from the consciousness of the Universe itself. This is the view I ultimately defend in Consciousness and Fundamental Reality.

The cosmopsychist need not think of the conscious Universe as having human-like mental features, such as thought and rationality. Indeed, in my book I suggested that we think of the cosmic consciousness as a kind of ‘mess’ devoid of intellect or reason. However, it now seems to me that reflection on the fine-tuning might give us grounds for thinking that the mental life of the Universe is just a little closer than I had previously thought to the mental life of a human being.

The Canadian philosopher John Leslie proposed an intriguing explanation of the fine-tuning, which in Universes (1989) he called ‘axiarchism’. What strikes us as so incredible about the fine-tuning is that, of all the values the constants in our laws could have had, they ended up having exactly those values required for something of great value: life, and ultimately intelligent life. If the laws had not, against huge odds, been fine-tuned, the Universe would have had infinitely less value; some say it would have had no value at all. Leslie proposes that this proper understanding of the problem points us in the direction of the best solution: the laws are fine-tuned because their being so leads to something of great value. Leslie is not imagining a deity mediating between the facts of value and the cosmological facts; the facts of value, as it were, reach out and fix the values directly.

It can hardly be denied that axiarchism is a parsimonious explanation of fine-tuning, as it posits no entities whatsoever other than the observable Universe. But it is not clear that it is intelligible. Values don’t seem to be the right kind of things to have a causal influence on the workings of the world, at least not independently of the motives of rational agents. It is rather like suggesting that the abstract number 9 caused a hurricane.

But the cosmopsychist has a way of rendering axiarchism intelligible, by proposing that the mental capacities of the Universe mediate between value facts and cosmological facts. On this view, which we can call ‘agentive cosmopsychism’, the Universe itself fine-tuned the laws in response to considerations of value. When was this done? In the first 10^-43 seconds, known as the Planck epoch, our current physical theories, in which the fine-tuned laws are embedded, break down. The cosmopsychist can propose that during this early stage of cosmological history, the Universe itself ‘chose’ the fine-tuned values in order to make possible a universe of value.

Making sense of this requires two modifications to basic cosmopsychism. Firstly, we need to suppose that the Universe acts through a basic capacity to recognise and respond to considerations of value. This is very different from how we normally think about things, but it is consistent with everything we observe. The Scottish philosopher David Hume long ago noted that all we can really observe is how things behave – the underlying forces that give rise to those behaviours are invisible to us. We standardly assume that the Universe is powered by a number of non-rational causal capacities, but it is also possible that it is powered by the capacity of the Universe to respond to considerations of value.

It is parsimonious to suppose that the Universe has a consciousness-involving nature

How are we to think about the laws of physics on this view? I suggest that we think of them as constraints on the agency of the Universe. Unlike the God of theism, this is an agent of limited power, which explains the manifest imperfections of the Universe. The Universe acts to maximise value, but is able to do so only within the constraints of the laws of physics. The beneficence of the Universe does not much reveal itself these days; the agentive cosmopsychist might explain this by holding that the Universe is now more constrained than it was in the unique circumstances of the first split second after the Big Bang, when currently known laws of physics did not apply.

Ockham’s razor is the principle that, all things being equal, more parsimonious theories – that is to say, theories with relatively few postulations – are to be preferred. Is it not a great cost in terms of parsimony to ascribe fundamental consciousness to the Universe? Not at all. The physical world must have some nature, and physics leaves us completely in the dark as to what it is. It is no less parsimonious to suppose that the Universe has a consciousness-involving nature than that it has some non-consciousness-involving nature. If anything, the former proposal is more parsimonious insofar as it is continuous with the only thing we really know about the nature of matter: that brains have consciousness.

Having said that, the second and final modification we must make to cosmopsychism in order to explain the fine-tuning does come at some cost. If the Universe, way back in the Planck epoch, fine-tuned the laws to bring about life billions of years in its future, then the Universe must in some sense be aware of the consequences of its actions. This is the second modification: I suggest that the agentive cosmopsychist postulate a basic disposition of the Universe to represent the complete potential consequences of each of its possible actions. In a sense, this is a simple postulation, but it cannot be denied that the complexity involved in these mental representations detracts from the parsimony of the view. However, this commitment is arguably less profligate than the postulations of the theist or the multiverse theorist. The theist postulates a supernatural agent while the agentive cosmopsychist postulates a natural agent. The multiverse theorist postulates an enormous number of distinct, unobservable entities: the many universes. The agentive cosmopsychist merely adds to an entity that we already believe in: the physical Universe. And most importantly, agentive cosmopsychism avoids the false predictions of its two rivals.

The idea that the Universe is a conscious mind that responds to value strikes us as a ludicrously extravagant cartoon. But we must judge the view not on its cultural associations but on its explanatory power. Agentive cosmopsychism explains the fine-tuning without making false predictions; and it does so with a simplicity and elegance unmatched by its rivals. It is a view we should take seriously.
 

I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
this is fucking unbelievable

https://www.msn.com/en-us/news/tech...-photo-of-a-single-atom/ar-BBJ5C2U?li=BBnb7Kz

A scientist captured an impossible photo of a single atom



A student at the University of Oxford is being celebrated in the world of science photography for capturing a single, floating atom with an ordinary camera.

Using long exposure, PhD candidate David Nadlinger took a photo of a glowing atom in an intricate web of laboratory machinery. In it, the single strontium atom is illuminated by a laser while suspended in the air by two electrodes. For a sense of scale, those two electrodes on each side of the tiny dot are only two millimeters apart.

The image won first prize in a science photo contest conducted by the UK-based Engineering and Physical Sciences Research Council (EPSRC).

The EPSRC explains how a single atom is somehow visible to a normal camera:

When illuminated by a laser of the right blue-violet colour, the atom absorbs and re-emits light particles sufficiently quickly for an ordinary camera to capture it in a long exposure photograph.
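As a rough back-of-envelope sketch of that explanation (the rates and fractions below are assumed orders of magnitude, not figures from the EPSRC or from Nadlinger), the arithmetic looks something like this:

```python
# Back-of-envelope sketch of why a long exposure can record a single atom.
# Every number here is an assumed order of magnitude for illustration.

scatter_rate = 3e7          # photons/s re-emitted by a laser-driven ion (assumed)
exposure_s = 30.0           # length of the long exposure in seconds (assumed)
collected_fraction = 1e-3   # fraction of those photons the lens catches (assumed)

photons_recorded = scatter_rate * exposure_s * collected_fraction
print(f"~{photons_recorded:.0e} photons from one atom reach the sensor")
# Hundreds of thousands of photons concentrated on a few pixels is easily
# enough to register as a faint dot.
```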

In the award’s announcement, Nadlinger is quoted on trying to render the microscopic visible through conventional photography. “The idea of being able to see a single atom with the naked eye had struck me as a wonderfully direct and visceral bridge between the minuscule quantum world and our macroscopic reality,” he said.

Other than using extension tubes, a lens accessory that fits between the camera body and the lens to allow closer focusing and is typically reserved for extreme close-up photography, Nadlinger used normal gear that most photographers have access to. Even without a particularly complicated rig, his patience and attention to detail paid off.

“When I set off to the lab with camera and tripods one quiet Sunday afternoon,” he said, “I was rewarded with this particular picture of a small, pale blue dot.”
www.bbc.com/news/science-environment-43065485

Quantum computers 'one step closer'

Quantum computing has taken a step forward with the development of a programmable quantum processor made with silicon.

The team used microwave energy to align two electron particles suspended in silicon, then used them to perform a set of test calculations.

By using silicon, the scientists hope that quantum computers will be easier to control and manufacture.

The research was published in the journal Nature.

The old thought experiment of Schrödinger's Cat is often used to frame a basic concept of quantum theory.

We use it to explain the peculiar, but important, concept of superposition, where something can exist in multiple states at once.

For Schrödinger's feline friend, the simultaneous states were dead and alive.

Superposition is what makes quantum computing so potentially powerful.

Standard computer processors rely on packets or bits of information, each one representing a single yes or no answer.

Quantum processors are different. They don't work in the realm of yes or no, but in the almost surreal world of yes and no. This twin-state of quantum information is known as a qubit.

Unstable liaisons

To harness their power, you have to link multiple qubits together, a process called entanglement.

With each additional qubit added, the computation power of the processor is effectively doubled.
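One way to see both superposition and the doubling is to simulate tiny state vectors classically. This is a generic toy illustration in Python/NumPy, not a model of the silicon device described in the article:

```python
import numpy as np

# Classical toy simulation of qubit state vectors, illustrating why each
# added qubit doubles the amount of information the processor juggles.

zero = np.array([1.0, 0.0])           # |0>  ("no")
one = np.array([0.0, 1.0])            # |1>  ("yes")
plus = (zero + one) / np.sqrt(2)      # superposition: "yes and no" at once

state = plus
for n in range(1, 6):
    print(f"{n} qubit(s): {state.size} amplitudes to track")
    state = np.kron(state, plus)      # bolt on one more qubit

# Entanglement: this two-qubit Bell state cannot be split into two separate
# one-qubit states; the pair only makes sense as a whole.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print("Bell state amplitudes:", bell)
```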

But generating and linking qubits, then instructing them to perform calculations in their entangled state is no easy task. They are incredibly sensitive to external forces, which can give rise to errors in the calculations and in the worst-case scenario make the entangled qubits fall apart.

As additional qubits are added, the adverse effects of these external forces mount.

One way to cope with this is to include additional qubits whose sole role is to vet and correct outputs for misleading or erroneous data.

This means that more powerful quantum computers - ones that will be useful for complex problem solving, like working out how proteins fold or modelling physical processes inside complex atoms - will need lots of qubits.

Dr Tom Watson, based at Delft University of Technology in the Netherlands, and one of the authors of the paper, told BBC News: "You have to think what it will take to do useful quantum computing. The numbers are not very well defined but it's probably going to take thousands maybe millions of qubits, so you need to build your qubits in a way that can scale up to these numbers."

In short, if quantum computers are going to take off, you need to come up with an easy way to manufacture large and stable qubit processors.

And Dr Watson and his colleagues thought there was an obvious solution.

Tried and tested

"As we've seen in the computer industry, silicon works quite well in terms of scaling up using the fabrication methods used", he said.

The team of researchers, which also included scientists from the University of Wisconsin-Madison, turned to silicon to suspend single electron qubits whose spin was fixed by the use of microwave energy.

In the superposition state, the electron was spinning both up and down.

The team were then able to connect two qubits and programme them to perform trial calculations.

They could then cross-check the data generated by the quantum silicon processor with that generated by a standard computer running the same test calculations.

The data matched.

The team had successfully built a programmable two-qubit silicon-based processor.

Commenting on the study, Prof Winfried Hensinger, from the University of Sussex, said: "The team managed to make a two qubit quantum gate with a very respectable error rate. While the error rate is still much higher than in trapped ion or superconducting qubit quantum computers, the achievement is still remarkable, as isolating the qubits from noise is extremely hard."

He added: "It remains to be seen whether error rates can be realised that are consistent with the concept of fault-tolerant quantum computing operation. However, without doubt this is a truly outstanding achievement."

And in an accompanying paper, an international team, led by Prof Jason Petta from Princeton University, was able to transfer the state of the spin of an electron suspended in silicon onto a single photon of light.

According to Prof Hensinger, this is a "fantastic achievement" in the development of silicon-based quantum computers.

He explained: "If quantum gates in a solid state quantum computer can ever be realised with sufficiently low error rates, then this method could be used to connect different quantum computing modules which would allow for a fully modular quantum computer."
 

I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
https://www.msn.com/en-us/news/tech...s-what-you-need-to-know/ar-BBJbfHs?li=BBnb7Kz

Solar storm expected to reach Earth today – here’s what you need to know

The Sun is pretty important to life as we know it. After all, we wouldn’t be here without it, but it also gives us a headache every once in a while. Astronomers captured a glimpse of a large solar flare a few days ago which produced a CME, or coronal mass ejection, and it’s expected to hit Earth today as a solar storm.

When a coronal mass ejection takes place, the Sun spits a mass of plasma and electromagnetic radiation into space. When it happens to toss that material in the direction of Earth, we experience solar storms here on Earth a few days later. Thankfully, the CME that occurred earlier this week was relatively minor, and it’s not expected to have any dire effect on us, but that doesn’t mean we won’t notice it.

As it always does when it reaches Earth, the magnetized material the Sun spat out will interact with Earth’s own magnetic field. This often results in auroras (aka “Northern Lights”) which are significantly brighter than normal, but particularly large CMEs can be a hazard for astronauts as well as spacecraft and satellites. In some cases, massive solar storms have actually temporarily knocked out power grids on the Earth’s surface.

This time around, the solar storm is expected to be fairly small, and will likely produce some brighter-than-average auroras. On the geomagnetic storm scale, which ranks space storms from G1 (lowest) to G5 (highest), it’s expected to be a G1. According to the National Oceanic and Atmospheric Administration, G1 storms can have a mild effect on migratory animals such as whales, a minor impact on satellite operations, and a chance of producing “weak power grid fluctuations.”

In short, the Sun cut us a break this time around, but it definitely likes to remind us that it’s there.
https://www.economist.com/news/scie...all-intestine-how-too-much-fructose-may-cause

How too much fructose may cause liver damage

FRUCTOSE is the sweetest of the natural sugars. As its name suggests, it is found mainly in fruits. Its job seems to be to appeal to the sweet tooths of the vertebrates these fruit have evolved to be eaten by, the better to scatter their seeds far and wide. Fructose is also, however, often added by manufacturers of food and drink, to sweeten their products and make them appeal to one species of vertebrate in particular, namely Homo sapiens. And that may be a problem, because too much fructose in the diet seems to be associated with liver disease and type 2 diabetes.

The nature of this association has been debated for years. Some argue that the effect is indirect. They suggest that, because sweet tastes suppress the feeling of being full (the reason why desserts, which come at the end of a meal, are sweet), consuming foods rich in fructose encourages overeating and the diseases consequent upon that. Others think the effect is more direct. They suspect that the cause is the way fructose is metabolised. Evidence clearly supporting either hypothesis has, though, been hard to come by.

This week, however, the metabolic hypothesis has received a boost from a study published in Cell Metabolism by Josh Rabinowitz of Princeton University and his colleagues. Specifically, Dr Rabinowitz’s work suggests that fructose, when consumed in large enough quantities, overwhelms the mechanism in the small intestine that has evolved to handle it. This enables it to get into the bloodstream along with other digested molecules and travel to the liver, where some of it is converted into fat. And that is a process which has the potential to cause long-term damage.

Dr Rabinowitz and his associates came to this conclusion by tracking fructose, and also glucose, the most common natural sugar, through the bodies of mice. They did this by making sugar molecules that included a rare but non-radioactive isotope of carbon, 13C. Some animals were fed fructose doped with this isotope. Others were fed glucose doped with it. By looking at where the 13C went in each case the researchers could follow the fates of the two sorts of sugar.

The liver is the prime metabolic processing centre in the body, so they expected to see fructose dealt with there. But the isotopes told a different story. When glucose was the doped sugar molecule, 13C was carried rapidly to the liver from the small intestine through the hepatic portal vein. This is a direct connection between the two organs that exists to make such transfers of digested food molecules. It was then distributed to the rest of the body through the general blood circulation. When fructose was doped, though, and administered in small quantities, the isotope gathered in the small intestine instead of being transported to the liver. It seems that the intestine itself has the job of dealing with fructose, thus making sure that this substance never even reaches the liver.

Having established that the two sorts of sugar are handled differently, Dr Rabinowitz and his colleagues then upped the doses. Their intention was to mimic in their mice the proportionate amount of each sugar that a human being would ingest when consuming a small fructose-enhanced soft drink. As they expected, all of the glucose in the dose was transported efficiently to the liver, whence it was released into the wider bloodstream for use in the rest of the body. Also as expected, the fructose remained in the small intestine for processing. But not forever. About 30% of it escaped, and was carried unprocessed to the liver. Here, a part of it was converted into fat.

That is not a problem in the short term. Livers can store a certain amount of fat without fuss. And Dr Rabinowitz’s experiments are only short-term trials. But in the longer term chronic fat production in the liver often leads to disease—and is something to be avoided, if possible.
 

Poindexter

Reputation: ∞
Staff member
Joined
Aug 26, 2008
Location
The Abyss
Well this is fucking creepy.
Superficially seems frightening, but really seems like bullshit. The girl is clearly saying what she thinks should be said. And if you know a child has exhibited this behavior but allow them the opportunity to do it repeatedly, then you are retarded. It's propaganda at best. This is a joke. @jesusatemyhotdog thoughts?
 

Kano

My New Challenge
Site Donor
Joined
Dec 3, 2014
Location
Icebox of the Nation
Superficially seems frightening, but really seems like bullshit. The girl is clearly saying what she thinks should be said. And if you know a child has exhibited this behavior but allow them the opportunity to do it repeatedly, then you are retarded. It's propaganda at best. This is a joke. @jesusatemyhotdog thoughts?
I'm surprised they still let her play the part of Matilda after that interview.
Accidentally quoted you poindexter
 

I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
anyone else here alter their brain waves with sound?

I do

medium.com/the-spike/your-cortex-contains-17-billion-computers-9034e42d34f2
Your Cortex Contains 17 Billion Computers
Neural networks of neural networks

Brains receive input from the outside world, their neurons do something to that input, and create an output. That output may be a thought (I want curry for dinner); it may be an action (make curry); it may be a change in mood (yay curry!). Whatever the output, that “something” is a transformation of some form of input (a menu) to output (“chicken dansak, please”). And if we think of a brain as a device that transforms inputs to outputs then, inexorably, the computer becomes our analogy of choice.

For some this analogy is merely a useful rhetorical device; for others it is a serious idea. But the brain isn’t a computer. Each neuron is a computer. Your cortex contains 17 billion computers.

OK, what? Look at this:


A pyramidal cell — squashed into two dimensions. The black blob in the middle is the neuron’s body; the rest of the wires are its dendrites. Credit: Alain Destexhe / http://cns.iaf.cnrs-gif.fr/alain_geometries.html
This is a picture of a pyramidal cell, the neuron that makes up most of your cortex. The blob in the centre is the neuron’s body; the wires stretching and branching above and below are the dendrites, the twisting cables that gather the inputs from other neurons near and far. Those inputs fall all across the dendrites, some right up close to the body, some far out on the tips. Where they fall matters.

But you wouldn’t think it. When talking about how neurons work, we usually end up with the sum-up-inputs-and-spit-out-spike idea. In this idea, the dendrites are just a device to collect inputs. Activating each input alone makes a small change to the neuron’s voltage. Sum up enough of these small changes, from all across the dendrites, and the neuron will spit out a spike from its body, down its axon, to go be an input to other neurons.


The sum-up-and-spit-out-spike model of a neuron. If enough inputs arrive at the same time — enough to cross a threshold (grey circle) — the neuron spits out a spike.
It’s a handy mental model for thinking about neurons. It forms the basis for all artificial neural networks. It’s wrong.
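For reference, this is roughly what that standard sum-and-threshold model amounts to in code; a minimal sketch with made-up weights and threshold, and the same kind of unit that artificial neural networks are built from:

```python
import numpy as np

# The textbook point-neuron: weight the inputs, sum them, and spike if the
# total crosses a threshold. Weights and threshold are illustrative numbers.

def point_neuron(inputs, weights, threshold=1.0):
    return int(np.dot(inputs, weights) >= threshold)   # 1 = spike, 0 = silence

weights = np.array([0.4, 0.4, 0.4])
print(point_neuron(np.array([1, 0, 0]), weights))   # 0: one input is not enough
print(point_neuron(np.array([1, 1, 1]), weights))   # 1: summed inputs cross the threshold
```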

Those dendrites are not just bits of wire: they also have their own apparatus for making spikes. If enough inputs are activated in the same small bit of dendrite then the sum of those simultaneous inputs will be bigger than the sum of each input acting alone:


The two coloured blobs are two inputs to a single bit of dendrite. When they are activated on their own, they each create the responses shown, where the grey arrow indicates the activation of that input (response here means “change in voltage”). When activated together, the response is larger (solid line) than the sum of their individual responses (dotted line).
The relationship between the number of active inputs and the size of the response in a little bit of dendrite looks like this:


Size of the response in a single branch of a dendrite to increasing numbers of active inputs. The local “spike” is the jump from almost no response to a large response.
There’s the local spike: the sudden jump from almost no response to a few inputs, to a big response with just one more input. A bit of dendrite is “supralinear”: within a dendrite, 2+2=6.
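One way to picture that supralinearity is as a steep sigmoid over the number of co-active inputs on a branch. The curve below is a hypothetical stand-in for illustration, not a fitted model of real dendrites:

```python
import numpy as np

# Hypothetical branch nonlinearity: a steep sigmoid over the number of
# co-active inputs. Parameters are invented for illustration only.

def branch_response(n_active, threshold=3.0, steepness=4.0):
    return 1.0 / (1.0 + np.exp(-steepness * (n_active - threshold)))

pair = branch_response(2)        # response to two inputs arriving on their own
all_four = branch_response(4)    # response when all four arrive together
print(f"sum of two separate 2-input responses: {2 * pair:.2f}")
print(f"all four inputs together:              {all_four:.2f}   (within a dendrite, 2 + 2 > 4)")
```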

We’ve known about these local spikes in bits of dendrite for many years. We’ve seen these local spikes in neurons within slices of brain. We’ve seen them in the brains of anaesthetised animals having their paws tickled (yes, unconscious brains still feel stuff; they just don’t bother to tell anyone). We’ve very recently seen them in the dendrites of neurons in animals that were moving about (yeah, Moore and friends recorded the activity in something a few micrometres across from the brain of a mouse that was moving about; crazy, huh?). A pyramidal neuron’s dendrites can make “spikes”.

So they exist: but why does this local spike change the way we think about the brain as a computer? Because the dendrites of a pyramidal neuron contain many separate branches. And each can sum-up-and-spit-out-a-spike. Which means that each branch of a dendrite acts like a little nonlinear output device, summing up and outputting a local spike if that branch gets enough inputs at roughly the same time:


Deja vu. A single dendritic branch acts as a little device for summing up inputs and giving an output if enough inputs were active at the same time. And the transformation from input to output (the grey circle) is just the graph we’ve already seen above, which gives the size of the response from the number of inputs.
Wait. Wasn’t that our model of a neuron? Yes it was. Now if we replace each little branch of dendrite with one of our little “neuron” devices, then a pyramidal neuron looks something like this:


Left: A single neuron has many dendritic branches (above and below its body). Right: so it is a collection of non-linear summation devices (yellow boxes, and nonlinear outputs), that all output to the body of the neuron (grey box), where they are summed together. Look familiar?
Yes, each pyramidal neuron is a two-layer neural network. All by itself.

Beautiful work by Poirazi and Mel back in 2003 showed this explicitly. They built a complex computer model of a single neuron, simulating each little bit of dendrite, the local spikes within them, and how they sweep down to the body. They then directly compared the output of the neuron to the output of a two-layer neural network: and they were the same.

The extraordinary implication of these local spikes is that each neuron is a computer. By itself the neuron can compute a huge range of so-called nonlinear functions. Functions that a neuron which just sums-up-and-spits-out-a-spike cannot ever compute. For example, with four inputs (Blue, Sea, Yellow, and Sun) and two branches acting as little non-linear devices, we can set up a pyramidal neuron to compute the “feature-binding” function: we can ask it to respond to Blue and Sea together, or respond to Yellow and Sun together, but not to respond otherwise — not even to Blue and Sun together or Yellow and Sea together. Of course, neurons receive many more than four inputs, and have many more than two branches: so the range of logical functions they could compute is astronomical.
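Here is a minimal sketch of that feature-binding example, with two branches wired as the text describes; the thresholds and wiring are illustrative choices, not a biophysical model. Note that a single sum-and-threshold unit cannot compute this function, which is exactly the article's point.

```python
# Toy two-branch neuron for the feature-binding function described above.
# Wiring and thresholds are illustrative, not a biophysical model.

def branch(inputs, threshold=2):
    # A dendritic branch fires a local spike only if enough of ITS inputs co-occur.
    return 1 if sum(inputs) >= threshold else 0

def neuron(blue, sea, yellow, sun):
    branch_1 = branch([blue, sea])       # this branch sees only Blue and Sea
    branch_2 = branch([yellow, sun])     # this branch sees only Yellow and Sun
    return 1 if branch_1 + branch_2 >= 1 else 0   # soma spikes if any branch does

for blue, sea, yellow, sun in [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 0, 1), (0, 1, 1, 0)]:
    print((blue, sea, yellow, sun), "->", neuron(blue, sea, yellow, sun))
# Blue+Sea -> 1, Yellow+Sun -> 1, Blue+Sun -> 0, Yellow+Sea -> 0
```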

More recently, Romain Caze and friends (I am one of those friends) have shown that a single neuron can compute an amazing range of functions even if it cannot make a local, dendritic spike. Because dendrites are naturally not linear: in their normal state they actually sum up inputs to a total less than the sum of the individual values. They are sub-linear. For them 2+2 = 3.5. And having many dendritic branches with sub-linear summation also lets the neuron act as a two-layer neural network. A two-layer neural network that can compute a different set of non-linear functions to those computed by neurons with supra-linear dendrites. And pretty much every neuron in the brain has dendrites. So almost all neurons could, in principle, be a two-layer neural network.

The other amazing implication of the local spike is that neurons know a hell of a lot more about the world than they tell us — or other neurons, for that matter.

Not long ago, I asked a simple question: How does the brain compartmentalise information? When we look at the wiring between neurons in the brain, we can trace a path from any neuron to any other. How then does information apparently available in one part of the brain (say, the smell of curry) not appear in all other parts of the brain (like the visual cortex)?

There are two opposing answers to that. The first is, in some cases, the brain is not compartmentalised: information does pop up in weird places, like sound in brain regions dealing with place. But the other answer is: the brain is compartmentalised — by dendrites.

As we just saw, the local spike is a non-linear event: it is bigger than the sum of its inputs. And the neuron’s body basically can’t detect anything that is not a local spike. Which means that it ignores most of its individual inputs: the bit which spits out the spike to the rest of the brain is isolated from much of the information the neuron receives. The neuron only responds when a lot of the inputs are active together in time and in space (on the same bit of dendrite).

If this was true, then we should see that dendrites respond to things that the neuron does not respond to. We see exactly this. In visual cortex, we know that many neurons respond only to things in the world moving at a certain angle (like most, but by no means all of us, they have a preferred orientation). Some neurons fire their spikes to things at 60 degrees; some at 90 degrees; some at 120 degrees. But when we record what their dendrites respond to, we see responses to every angle. The dendrites know a hell of a lot more about how objects in the world are arranged than the neuron’s body does.

They also look at a hell of a lot more of the world. Neurons in visual cortex only respond to things in a particular position in the world — one neuron may respond to things in the top left of your vision; another to things in the bottom right. Very recently Sonia Hofer and her team showed that while the spikes from neurons only happen in response to objects appearing in one particular position, their dendrites respond to many different positions in the world, often far from the neuron’s apparent preferred position. So the neurons respond only to a small fraction of the information they receive, with the rest tucked away in their dendrites.

Why does all this matter? It means that each neuron could radically change its function by changes to just a few of its inputs. A few get weaker, and suddenly a whole branch of dendrite goes silent: the neuron that was previously happy to see cats, for that branch liked cats, no longer responds when your cat walks over your bloody keyboard as you are working — and you are a much calmer, more together person as a result. A few inputs get stronger, and suddenly a whole branch starts responding: a neuron that previously did not care for the taste of olives now responds joyously to a mouthful of ripe green olive — in my experience, this neuron only comes online in your early twenties. If all inputs were summed together, then changing a neuron’s function would mean having the new inputs laboriously fight each and every other input for attention; but have each bit of dendrite act independently, and new computations become a doddle.

It means the brain can do many computations beyond treating each neuron as a machine for summing up inputs and spitting out a spike. Yet that’s the basis for all the units that make up an artificial neural network. It suggests that deep learning and its AI brethren have but glimpsed the computational power of an actual brain.

Your cortex contains 17 billion neurons. To understand what they do, we often make analogies with computers. Some use these analogies as cornerstones of their arguments. Some consider them to be deeply misguided. Our analogies often look to artificial neural networks: for neural networks compute, and they are made up of neuron-like things; and so, therefore, should brains compute. But if we think the brain is a computer, because it is like a neural network, then now we must admit that individual neurons are computers too. All 17 billion of them in your cortex; perhaps all 86 billion in your brain.

And so it means your cortex is not a neural network. Your cortex is a neural network of neural networks.
 

I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
do yourself a favor and dont watch this

or at least do it somewhere where you wont have to drive or operate any machinery for a while, as this WILL mess with your vision


if youre into self torture like me, do what I just did. watch it full screen in a dark room and regret it after, because youre sober and hallucinating

 

Tapout

Bringing Sexy Back
Site Donor
Joined
Aug 31, 2008
Location
Los Angeles via Chicago
do yourself a favor and dont watch this

or at least do it somewhere where you wont have to drive or operate any machinery for a while, as this WILL mess with your vision


if youre into self torture like me, do what I just did. watch it full screen in a dark room and regret it after, because youre sober and hallucinating

That eye thing was super trippy when it was done
 

I-is-traahDIN

A terrible human being
Joined
Aug 18, 2014
wrong. after that will be the next (the eleventh I think) big bang.
those could be happening all the time, and since the universe is expanding, we will never see them, as the light will never reach us

theres a theory that what we call 'dark energy' (thats causing the expansion to accelerate) isnt an 'energy' at all, but some force outside of the visible light of the universe

in other words, if something happened before the 'big bang' that created our visible universe, the light from that event would be eternally invisible to us because light speed in space is constant, so we see it as a force acting within our universe, when really its an illusion

think of the universe as a balloon

we're on the surface

its expanding and we think its something inside the balloon, when it could be a vacuum outside the balloon causing it to expand

so we mistake one thing as tangible, when its another force entirely

and unless we could go faster than the speed of light (which E = mc² makes impossible) we will never see that force


eventually all the stars in the sky will fade as the universe expands faster than the light emitted from those celestial bodies can reach us


I hope that made some sense
@RKelly @Machida @Trump

 
