A 'consciousness conductor' synchronizes and connects mouse brain areas

That's rather extreme. So we should stop forward progress in a number of areas simply because there are problems? What if this paved the way to curing Alzheimer's, or to enabling paraplegics to directly control their limbs again via implanted chips?

It's so sad to see phrases like "researchers must ask permission". If Galileo had asked permission, we would all still think the Earth was the center of the universe. And yes, that had HUGE ethical considerations for the time; hell, the church (the dominant authority) put him on trial for heresy over it. There is a balance to be maintained between ensuring research is done in an ethical manner and flat-out controlling what people can research. Neither extreme is correct.

But Galileo wasn't asking permission to mess with the sun's orbit or something that could kill everyone. Heresy/faith is not the leading wall being imposed here.

My question is, how difficult is it to simply have a shackled, rate-limited AI? Can't you control the actual cycle rate of the CPU in a machine not connected to ANY network? I get that what is scary about these developments is that even the experts performing the research and setting up the algorithms don't know exactly what is happening during much of the process. That should at least allay fears of a Skynet situation. My only concern after that would be creating a sentient being that, if it thinks thousands of times faster than us, would be essentially alone in the dark for a perceived millions of years, driven completely insane by the experience, all while the researcher is grabbing coffee.

Is it because the general requirements of consciousness demand much higher processing power/speed, so we don't know how to activate it without having it run super fast? If we knew what the threshold was, we could probably dial it back, but we haven't discovered what that threshold actually is yet?
 

Ugh, fine: what if Salk had had to ask permission to create a vaccine, or Banting a medicine that can kill as easily as it can heal? Or if the Wright brothers had been told no because someone might crash that crazy contraption into a building someday?
 
Absolutely NOT. Ban this crap now. Artificial self-aware consciousness? Humans are flawed enough, and the created can never exceed the creator.
 
Let's take a lap and see what else is said...

"In the field of consciousness studies, a recurrent approach has consisted in explaining consciousness as an emergent property of information or as a special kind of information. The idea is that the central nervous system processes information and that under the right circumstances information is responsible for the emergence of phenomenal experience. Many consider information to be more akin to the mental than to its raw physical underpinnings. If information had such ontological status, it would be conceivable to realize consciousness in digital systems, either by creating artificial consciousness, or by uploading and preserving human consciousness, or both. Unfortunately, this is not a viable possibility since information so construed simply does not exist and thus cannot be a case of consciousness nor be the underpinnings of consciousness. In this paper we will show that information is only an epistemic shortcut to refer to joint probabilities between states of affairs among physical events. If information as an entity beyond those relations is not part of our ontology, then digital consciousness is impossible."

---

"The integrated information theory is thought to be a key clue towards the theoretical understanding of consciousness. In this study, we propose a simple numerical model comprising a set of coupled double quantum dots, where the disconnection of the elements is represented by the removal of Coulomb interaction between the quantum dots, for the quantitative investigation of integrated information. As a measure of integrated information, we calculate the mutual information in the model system, as the Kullback–Leibler divergence between the connected and disconnected status, through the probability distribution of the electronic states from the master transition-rate equations. We reasonably demonstrate that the increase in the strength of interaction between the quantum dots leads to higher mutual information, owing to the larger divergence in the probability distributions of the electronic states. Our model setup could be a useful basic tool for numerical analyses in the field of integrated information theory."

https://arxiv.org/ftp/arxiv/papers/2006/2006.16243.pdf
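The mutual-information measure the abstract describes is easy to play with numerically. Here is a minimal Python sketch of the same idea: mutual information as the KL divergence between a joint ("connected") distribution and the product of its marginals ("disconnected"). The 2x2 joint distribution and the coupling parameter `eps` are made up for illustration, standing in loosely for the quantum-dot interaction strength:

```python
import numpy as np

def mutual_information(joint):
    """KL divergence D(p_joint || p_x * p_y) in nats."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of subsystem X
    py = joint.sum(axis=0, keepdims=True)   # marginal of subsystem Y
    independent = px * py                   # "disconnected" distribution
    mask = joint > 0                        # skip zero-probability terms
    return float(np.sum(joint[mask] * np.log(joint[mask] / independent[mask])))

for eps in (0.0, 0.1, 0.2):
    # Larger eps -> states of the two subsystems more correlated
    # -> larger divergence between joint and product of marginals.
    joint = np.array([[0.25 + eps, 0.25 - eps],
                      [0.25 - eps, 0.25 + eps]])
    print(eps, mutual_information(joint))
```

At `eps = 0` the subsystems are independent and the divergence is zero; cranking up the coupling increases it, which is the qualitative behavior the paper reports for its quantum-dot model.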
 
Yeah. You know, it's almost unavoidable to conclude that AI consciousness is too useful and too close to full bloom to be avoided.

But what if we decided to "eat it" instead? I think the human body has about half a volt available (correct me if that's a bad number). Soon enough, that could power an implanted "Google chip" and a "public WiFi" net doorway (perhaps with Elon Musk's low-earth-orbit 'WorldWide Net').

And, if all knowledge becomes immediately accessible, it might be time to short education stocks...
 
 
Exactly what we need in 2020 :woot:

The human race isn't the end-all, be-all, so when it creates something that takes it out, it will just be another blip on the radar of time. The rest of this planet will be better for it, too! Covid's not the apex virus on this planet! :cigar:
 


hmmm. Try this: If it's a sunny day, get up early and go outside.

Raise your face towards the sun. Hold your hands out straight from your shoulders, palms up and relaxed.

Murmur softly: It's a beautiful day...

See if that helps.
 

Yeah, tell the rest of the species on this planet to do that. Oh, and warn them to try not to get in humanity's way.
 
Rough Set & Riemannian Covariance Matrix Theory for Mining the Multidimensionality of Artificial Consciousness


" This paper presents a means to analyze the multidimensionality of human consciousness as it interacts with the brain by utilizing Rough Set Theory and Riemannian Covariance Matrices. We mathematically define the infantile state of a robot’s operating system running artificial consciousness, which operates mutually exclusively to the operating system for its AI and locomotor functions"
 

I get a 404 from that link
 

Thanks for the link. After reading some of the examples, I would say they are straying farther from consciousness and closer to the symbolic representation of actions. In both of their examples, a variable that the person values above all else at the time clearly drives the decision. A decision like that is not hard to simulate, and I believe that's why they focused on it (to be able to derive accurate results from very limited computational resources).

I believe this neglects a prime aspect of consciousness: the mind's ability to create its own information in conjunction with the MANY systems of the brain that directly impact thought.
 

I love the use of FPGAs in self-configurable machines (so much so that I have been working with FPGAs as a primary interest). However, there are obvious limitations to such an approach, mainly the staggeringly limited computational resources an FPGA has compared to even a simple brain, or a traditional processor. It would be interesting to see what could result from a large FPGA array coupled with traditional processing power and, of course, a VERY large dataset to help mimic all the stimulation a brain gets during development.

The first link also interests me in how the FPGA was operating. So far all my FPGA creations have been programmed in a very traditional manner, but it's fascinating to think of a self-oscillating machine within the FPGA that arises from within itself, deprived of traditional FPGA clocking resources.
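That self-oscillation idea has a classic digital analogue: a ring of an odd number of inverters has no stable state, so it oscillates with no clock resource at all. A toy Python sketch of why (purely illustrative; real FPGA LUT loops behave analogically, and this lock-step update is a simplification of real gate delays):

```python
def simulate_ring(n_inverters=3, steps=12):
    """Simulate a ring of inverters; returns the history of node 0."""
    assert n_inverters % 2 == 1, "even-length rings settle into a stable state"
    nodes = [0] * n_inverters
    history = []
    for _ in range(steps):
        # Synchronous update: every inverter recomputes its output
        # from the previous tick's value of the node feeding it.
        # With an odd number of inversions, no assignment satisfies
        # the whole loop, so the state keeps flipping forever.
        nodes = [1 - nodes[(i - 1) % n_inverters] for i in range(n_inverters)]
        history.append(nodes[0])
    return history

print(simulate_ring())  # node 0 toggles every tick: 1, 0, 1, 0, ...
```

With an even ring length the same loop finds a fixed point and goes quiet, which is exactly why ring oscillators are built from an odd number of stages.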
 
Maybe an emulation of this region could help toward it, too?

Neuroscientists identify the brain cells that help humans adapt to change



Significance
Cognitive flexibility entails the ability to disengage attention from stimuli when they fail to predict positive outcomes, and to engage attention to newly rewarding stimuli. This reconfiguration of attention is likely supported by neural circuits that assign values to visual features. We report that activity of two separable groups of fast spiking striatal interneurons are likely part of that neural circuit. We electrophysiologically identified distinct cell populations in the primate striatum and found that fast spiking interneurons changed their firing specifically at the onset of an attention cue during learning. These findings provide evidence for a role of fast spiking striatal neurons to mediate the flexible reconfiguration of attentional priorities during learning.
Abstract
Cognitive flexibility depends on a fast neural learning mechanism for enhancing momentary relevant over irrelevant information. A possible neural mechanism realizing this enhancement uses fast spiking interneurons (FSIs) in the striatum to train striatal projection neurons to gate relevant and suppress distracting cortical inputs. We found support for such a mechanism in nonhuman primates during the flexible adjustment of visual attention in a reversal learning task. FSI activity was modulated by visual attention cues during feature-based learning. One FSI subpopulation showed stronger activation during learning, while another FSI subpopulation showed response suppression after learning, which could indicate a disinhibitory effect on the local circuit. Additionally, FSIs that showed response suppression to learned attention cues were activated by salient distractor events, suggesting they contribute to suppressing bottom-up distraction. These findings suggest that striatal fast spiking interneurons play an important role when cues are learned that redirect attention away from previously relevant to newly relevant visual information. This cue-specific activity was independent of motor-related activity and thus tracked specifically the learning of reward predictive visual features.
 


---

Also:


Artificial Consciousness, Meta-Knowledge, and Physical Omniscience



"Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa."

https://www.worldscientific.com/doi/abs/10.1142/S2705078520500101
 
Do robots dream of escaping? Narrativity and ethics in Alex Garland’s Ex-Machina and Luke Scott’s Morgan

"Ex-Machina (2014) and Morgan (2016), two recent science-fiction films that deal with the creation of humanoids, also explored the relationship between artificial intelligence (AI), spatiality and the lingering question mark regarding artificial consciousness. In both narratives, the creators of the humanoids have tried to mimic human consciousness as closely as possible, which has resulted in the imprisonment of the humanoids due to proprietary concerns in Ex-Machina and due to the violent behavior of the humanoid in Morgan. This article addresses the dilemma of whether or not the humanoids in both films possess high levels of artificial consciousness and its possible consequences regarding focalization, a narrative term that presupposes subjectivity, as well as offer two new categories of posthuman focalization—X-focalization and A-focalization. The issue of captivity also has far-reaching ethical implications when considering the underlying assumption of artificial consciousness—if humanoids are indeed endowed with a subjective inner life, then they are entitled to be treated as moral agents, equivalent to humans rather than animals."
 
Soon we will be at the point that sci-fi movies were showing a decade ago, and then we will find aliens, bring them here, assuming we can inject them with AI chips, and train them and turn them into warfare machines aimed at China and Russia, and in the end, we will all get eaten.
Excited?
 
AI is a wall. The better it gets, the dumber we get.

Then the dumber AI gets.

It depends on whether we can get AI smart enough to improve itself before we get too dumb. I don't think we can; we'll get too dumb first. So we're safe from an AI takeover.

The reason we don't see a ton of extraterrestrial life isn't because they get wiped out by their own technology, they just get perpetually trapped in an "idiocracy" phase.
 

That's inaccurate. "AI" as we use it (commercially) is a finite set of knowledge that can be interpreted and added to, similar to any programming language. There are also many forms of AI, some straying far from where any meaningful results may be gained and perhaps closer to artificial consciousness. We also aren't getting dumber by any means. A smart person can do some research and get themselves to the forefront of "AI" in its implementation or theory; they have a good idea, and the field gets pushed forward. Trying to define the force behind developing AI as human skill is almost individualistic. Now, the point where actual AI can be used to push the field will be a cool point indeed, but that is fairly different from the "AI" used today. I would also like to point out that the capability of "AI" as we use it is determined by computational ability, which is still absolutely on the uptrend.

Imagine the computational ability available in the future if quantum computing reaches a viable point. Or can we never get to that point because human skill is on the downtrend?
 
Screw AI.

Those of you who think you can upload yourself into some digital AI superverse will be hella disappointed.

Mark my words, societal dependence on self-conscious AI will end in total disaster.
 
“It's quite good at making pretty language, and it's not very good at being logical and rational,” says Porr. So he picked a popular blog category that doesn’t require rigorous logic: productivity and self-help.

:ROFLMAO:

Don't folks wonder where all these click-bait articles come from? Irony would have it that humanity would invent something in the image of itself that corrupts everything it touches.

"AI", the great savior of the human race. GLWT (y)
 