Windows 11’s AI-powered Copilot (and its Bing-powered ads) enters public preview

So do you agree to launch this "copilot" or is it on by default and you have to find where the setting is to disable it?
 
They wanted 30 dollars per person each month with a minimum of 300 people to get the "preview" into our tenant. If we licensed everyone in the company it would be 1.9 million a year for copilot.
 
They must be losing a small fortune on clients at pricing that cheap.

Maybe Copilot is more token-heavy too, which could be why it's $30 a month rather than $20.

https://mspoweruser.com/report-microsoft-losing-money-on-each-github-copilot-customer/
It was revealed that, on average, GitHub was losing more than $20 per user each month earlier this year

They must think that their costs will come down fast (GPT-4 Turbo is three times cheaper than the non-Turbo version, which is itself quite recent) and that it needs to be subsidized to attract people at the moment.
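Back-of-the-envelope, using GPT-4 Turbo's published per-token list prices and usage numbers I'm making up purely for illustration (queries per day and tokens per query are my own assumptions, not anything Microsoft has said):

```python
# Rough sketch: does $30/user/month cover the inference bill?
# Token prices are GPT-4 Turbo's list prices at launch; the usage
# figures (queries/day, tokens/query) are assumptions for illustration.
INPUT_PRICE = 0.01 / 1000    # $ per input token
OUTPUT_PRICE = 0.03 / 1000   # $ per output token

queries_per_day = 30     # assumed heavy Office user
input_tokens = 2_000     # assumed prompt + document context per query
output_tokens = 500      # assumed response length per query
workdays = 22

monthly = workdays * queries_per_day * (
    input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
)
print(f"~${monthly:.2f}/user/month in inference")  # ~$23.10 at these numbers
# At pre-Turbo GPT-4 prices (~3x) the same usage would be roughly $69,
# i.e. well over the $30 sticker -- hence the subsidy theory.

# And for scale on the quote above: $1.9M/year at $30/user/month
# implies roughly 1_900_000 / (30 * 12) = ~5,278 licensed users.
```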
 
Damnit I wish Macs weren't so shit for gaming.
only a matter of time till they have it too, and if you use Edge it's already in that...

It's already in Edge...

 
So do you agree to launch this "copilot" or is it on by default and you have to find where the setting is to disable it?
It's opt-out, so it's on by default after the update gets installed. It's not an app, so there is no way to uninstall it. The only thing you can do at the moment is turn it off, but who knows how long Microsoft will let you do that.
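For anyone hunting for more than the Settings toggle: as of 23H2 there is also a documented "Turn off Windows Copilot" Group Policy, which boils down to a single registry value. A minimal sketch of the per-user form (the same value under HKLM\Software\Policies\... applies machine-wide), assuming Microsoft keeps honoring the policy in future builds:

```python
# Minimal sketch: set the "Turn off Windows Copilot" policy via the registry.
# This is the per-user form of the documented 23H2 Group Policy; whether
# Microsoft keeps honoring it in later builds is anyone's guess.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # 1 = Copilot disabled; delete the value (or set 0) to re-enable.
    winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)

print("Policy set; sign out and back in (or restart Explorer) to apply.")
```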
 

For fuck's sake.

I can understand wanting to keep up with modern features. They need to do this for those who want it, but why they keep insisting on shoving unwanted features down people's throats is beyond me.

At this point it feels like they are actively trying to piss people off and get them to abandon Windows.

Make it an installable option or something. That way the people who want it can get it, and those of us who don't never have to see it.
 
So the request has already rolled in that I release and license us for Copilot. Well, the request was for ChatGPT, but as both ChatGPT and Copilot are based on the same OpenAI tech and similar training libraries, I am far more inclined toward Copilot than toward getting SSO in place for ChatGPT. There's probably better volume licensing for Copilot access too, so might as well give it a go. Now to see what the licensing comes in at.
 
There is also the fact that Microsoft supports all Office 365 functionality on Windows 10 until Oct 14, 2025.
So Microsoft can't really roll out a new Office 365 feature (which Copilot is) without giving it Windows 10 support, as that would violate their existing support contracts.
 
only a matter of time till they have it too, and if you use Edge it's already in that...

It's not the AI itself. It's the implementation. Microsoft will push that shit into everything while Apple will make it unobtrusive, totally easy to ignore. Siri's spiffy new brain will start out great, get all the way to 85% of incredible and then be left in a hermetically sealed mayonnaise jar on a shelf in a closet at Funk and Wagnalls. Forgotten. Quietly patched out ten years later.
 
Microsoft will push that shit into everything while Apple will make it unobtrusive
People use AI every day on their iPhone without calling it that; I can easily see that.

Quietly patched out ten years later.
Machine learning is used to understand speech, generative AI for text-to-speech, image correction and stabilization, pathfinding in maps, automatically giving pictures relevant names instead of anonymous strings of numbers, editing pictures by voice command (remove X, change Y to Z, etc.), even agents that talk to hotel and restaurant staff to make reservations. It will never be patched out.

People will simply do what they have always done: start calling it "the computer" again and reserve the word "AI" for the things computers can't yet do well.
 
They wanted 30 dollars per person each month with a minimum of 300 people to get the "preview" into our tenant. If we licensed everyone in the company it would be 1.9 million a year for copilot.
Woof, Microsoft must think they are not only Apple, but also IBM/Oracle/Adobe all at once with licensing like that.
 
People use AI every day on their iPhone without calling it that; I can easily see that.


Machine learning is used to understand speech, generative AI for text-to-speech, image correction and stabilization, pathfinding in maps, automatically giving pictures relevant names instead of anonymous strings of numbers, editing pictures by voice command (remove X, change Y to Z, etc.), even agents that talk to hotel and restaurant staff to make reservations. It will never be patched out.

People will simply do what they have always done: start calling it "the computer" again and reserve the word "AI" for the things computers can't yet do well.
I know, I'm mostly being flippant about the hype and hysteria over machine learning.

I do genuinely fear the idea of search results being obfuscated by a friendly AI assistant that happens to be completely under the control of a corporation. It's like Google's gradual poisoning of the internet has taught us nothing.
 
Just upgraded to Win11 23H2 yesterday. No sign of copilot, FWIW. Hit Windows Update a couple of times but it only offered stuff like AV updates.
 
I got a notification that it's not available yet for Enterprise builds, so I can't add it to my existing plans; my rep is now trying to find me options.
 
People use AI every day on their iPhone without calling it that; I can easily see that.


Machine learning is used to understand speech, generative AI for text-to-speech, image correction and stabilization, pathfinding in maps, automatically giving pictures relevant names instead of anonymous strings of numbers, editing pictures by voice command (remove X, change Y to Z, etc.), even agents that talk to hotel and restaurant staff to make reservations. It will never be patched out.

People will simply do what they have always done: start calling it "the computer" again and reserve the word "AI" for the things computers can't yet do well.

I have all this shit disabled on my phone, and I have never even once used any kind of "voice assistant". That's disabled too.

If I am not manually dealing with all the details, then I don't trust that it is right. I don't trust other people doing things for me, and I sure as hell don't trust some ridiculous language model.

I use my computers and phones like it is 2014, and as long as I can I will continue to do so. Absolutely everything cloud and everything AI gets disabled, and if it can't be disabled, then I just don't use the product. I want absolutely none of this whatsoever.

I quit a job once because they wanted me to use SharePoint.

And it makes me absolutely livid - like foaming at the mouth furious - that all of these god awful tech companies are trying to force this shit on me.
 
It's opt-out, so it's on by default after the update gets installed. It's not an app, so there is no way to uninstall it. The only thing you can do at the moment is turn it off, but who knows how long Microsoft will let you do that.
Debloating tools like NTLite and MSMG Toolkit are currently being updated for 23H2, to be able to remove Copilot from the install ISO. Copilot likely falls onto the growing list of Windows components that are not uninstallable after the fact, and the only way to avoid them is never let them install to begin with. As the saying goes, "the best way to survive a plane crash is be on the ground when it happens."

But that does mean having to do a full reinstall to run a non-copilot 23H2.
 
I would like to see a list of which organizations are actually planning on using this on organizational data in support of decision-making, so I can avoid them.

It's only a matter of time until this AI nonsense makes a catastrophic, business-ending statistical assumption, and I don't want to be working at or invested in any of those businesses when it happens.

You cannot trust AI. It's just a series of statistically based guesses. It is not a substitute for rational decision-making.
 
I would like to see a list of which organizations are actually planning on using this on organizational data in support of decision-making, so I can avoid them.

It's only a matter of time until this AI nonsense makes a catastrophic, business-ending statistical assumption, and I don't want to be working at or invested in any of those businesses when it happens.

You cannot trust AI. It's just a series of statistically based guesses. It is not a substitute for rational decision-making.
Data analysis is the same with people. The difference is that people, while biased, still have intuition and foresight. AI doing data analysis will do cold analysis with whatever bias the executives demand hard-coded into the algorithm. Such a thing will accelerate the death-by-data epidemic we're currently dealing with.
 
Humans make company-ending bad decisions all the time. Humans struggle to order a donut in the drive-through in the morning, or make corporate decisions right after finding out they are getting divorced. Removing people's useless and impeding emotions from data analysis is a far better path.
 
The number of humans who actually change their minds based on explicit data is pretty small. Most go by their gut, which is in fact making statistical guesses based on its training data.

But programming based on a gut instinct about which function you should build next is a risky process, unless you have been building the same thing the same way for a decade.
 
Humans make company-ending bad decisions all the time. Humans struggle to order a donut in the drive-through in the morning, or make corporate decisions right after finding out they are getting divorced. Removing people's useless and impeding emotions from data analysis is a far better path.
The number of humans who actually change their minds based on explicit data is pretty small. Most go by their gut, which is in fact making statistical guesses based on its training data.

But programming based on a gut instinct about which function you should build next is a risky process, unless you have been building the same thing the same way for a decade.

Humans do make mistakes, but humans also have the ability to catch many of the really stupid ones. A black-box language model, not so much.

Anyone who uses AI for corporate analysis really shouldn't be working at all. It is one of the most profoundly stupid things I have seen in my life. (But not as stupid as some of the mistakes AI can make)

I fear that Microsoft - by pushing this shit out in this fashion - is normalizing not thinking, and just typing a stupid query into an AI text box, and if that happens, god help us all.

If I were king for a day, I'd even go so far as to completely ban all applications of AI. I'm OK with and even in favor of Machine Learning models where they find potential patterns for humans to further investigate, but AI making decisions is fantastically dangerous. Even the minutest decision should always be made by a human, and it needs to be made based on data, information and analysis that human actually understands, not some black box model.

The danger of AI is not the oft lampooned "robots are going to kill us all", but rather the danger is "people mistaking AI for actual intelligence and substituting it for rational thought on their own part because they are being lazy and don't want to work."

And seriously. Fuck the enablers of that dystopian future. Fuck every last researcher and business training and shoving AI models into their products for people to use for stupidity. And fuck the people who implement those systems in any business. Any IT person worth their salt should be blocking the likes of Co-pilot and Bing Chat on every single system in their environment to make sure absolutely no one uses it to take dangerous shortcuts. Even on the CEO and executives machines. Actually especially on the CEO and Executives machines.

The problem is, people will start using it for a few simple low-risk things, and it will likely get them right, lulling them into a false sense of security, until eventually they start blindly using it, the way Tesla drivers feel comfortable reading the paper or sleeping while their car's Autopilot feature drives them around.

It's only a matter of time before it goes horribly wrong. Hopefully it will just be some big bank being over the legal lending limit and people getting fired as a result and not things like buildings or bridges collapsing and planes falling out of the sky.

If we took this seriously right now, every business ought to have training sessions with all of their employees at every level throughout their organizations on the dangers of trusting AI, and why they should never use it for either business, financial or engineering analysis. This will all end badly, and it is SO predictable and guaranteed that it will.

Artificial intelligence is not intelligence. It's just like a toddler mimicking the motions of an adult without any of the actual understanding behind it.
 
If I were king for a day, I'd even go so far as to completely ban all applications of AI. I'm OK with and even in favor of Machine Learning models where they find potential patterns for humans to further investigate, but AI making decisions is fantastically dangerous
I mean, you can't be serious, and you say you are in favor of the very thing you would ban; but this (if even a bit serious) is ridiculous.

Banning video game NPCs, video stabilization, computers playing chess... how would we define AI in a way that doesn't just ban pretty much every computer application?
 
I mean, you can't be serious, and you say you are in favor of the very thing you would ban; but this (if even a bit serious) is ridiculous.

Banning video game NPCs, video stabilization, computers playing chess... how would we define AI in a way that doesn't just ban pretty much every computer application?

That's not the type of AI we are discussing here. No one is using Bing Chat or Co-Pilot to govern game NPC behavior. It is intended for a much more sinister application: processing knowledge and making decisions based on it.

Applications that have real-world implications - however - should ideally be off the table.
 
That's not the type of AI we are discussing here. Applications that have real-world implications - however - should ideally be off the table.
Which type of AI are we talking about, then? That's a bit the point: it will be really hard to define any ban on AI.

If we're talking about real-world implications, say a machine learning system that helps detect a potential issue in a huge pile of drone pictures of a crop field and sends the picture with a warning to someone: should that be off the table? What about speech-to-text? Systems that detect child pornographic content?
 
I dunno. GPT-4 isn't nearly as dumb as some of my friends, and isn't going to choose Windows for a desktop, for example. Or get knocked up by a dipshit. Or buy a house with a mortgage right before interest rates are obviously set to climb for decades.
 
I dunno. GPT-4 isn't nearly as dumb as some of my friends, and isn't going to choose Windows for a desktop, for example. Or get knocked up by a dipshit. Or buy a house with a mortgage right before interest rates are obviously set to climb for decades.

I bet they don't run businesses either, or do any kind of engineering testing :p
 
Here’s what got added to my licenses today
[attachment: screenshot of the added licenses]

Did you submit the form for access, or they just added it on their own?

Porting over to it from OpenAI takes like 5 minutes if you've done any work there. It's just a new API endpoint and key, and a couple very, very minor code changes. Same SDK.
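For the curious, here's roughly what that switch looks like with the openai Python SDK (v1+). The endpoint, key variable, deployment name, and API version below are placeholders for whatever your Azure resource actually uses:

```python
# Sketch of the OpenAI -> Azure OpenAI switch with the openai Python SDK (v1+).
# Endpoint, key env var, deployment name, and api_version are placeholders.
import os
from openai import AzureOpenAI  # instead of: from openai import OpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_KEY"],                  # new key
    azure_endpoint="https://my-resource.openai.azure.com",   # new endpoint
    api_version="2023-07-01-preview",                        # Azure-specific
)

# Same call shape as before; the one real change is that "model" now names
# your Azure *deployment* rather than an OpenAI model id.
resp = client.chat.completions.create(
    model="my-gpt4-deployment",
    messages=[{"role": "user", "content": "Hello from Azure"}],
)
print(resp.choices[0].message.content)
```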
 
Which type of AI are we talking about, then? That's a bit the point: it will be really hard to define any ban on AI.

If we're talking about real-world implications, say a machine learning system that helps detect a potential issue in a huge pile of drone pictures of a crop field and sends the picture with a warning to someone: should that be off the table? What about speech-to-text? Systems that detect child pornographic content?

I would draw the line at anything that has real-world implications. You want to develop AI for an in-game character, or your own art project? Fine, I have no complaints.

You want to use it on business data to run a business, or to test products before launch, or for anything else that touches the real world? I think I would want that to come to a screeching halt.

I'm also extremely concerned about military and law enforcement uses. Again, real world.

Even in arts and entertainment there are real concerns. AI is not intelligence, and as such it cannot now, nor will it ever be able to, create anything of its own. Everything it produces is an amalgamation of others' works. I'd insist that AI for art/creativity purposes must only be trained on content that you have unambiguous ownership of or use rights to; otherwise its output becomes IP theft.

I would also argue that any dataset used for training, if it contains any personal information of any human being at all, would need each individual's express written consent, or the model should be destroyed.

It should also not be OK to train a model on just anything that is publicly available on the internet. People who publish that information should have a say in how it is used. My making posts on a public forum, for instance, does not grant consent for others to analyze them or to train an AI model with them. They are posted for one purpose only, and that is discussion with fellow enthusiasts on the forum. I would consider any other use without my express written consent to be a violation. Essentially, no scraping allowed by anyone for any reason, and that includes for training AI.

I would argue that any AI model that has been trained using data that violates any of the points above should be taken off the market (by force if necessary) and destroyed.


I do not want my data used for AI (or any other) purposes. I do not want to own or use any product that contains AI, either at home or at work (with maybe the limited exception of the NPC behavior previously mentioned). I do not want to own or use any product that has been designed, developed or manufactured with the aid of AI, or by businesses using AI in analysis or testing/validation. I don't want to be exposed to or have to consume AI-generated content.

I think our regulators have been asleep at the switch on this one. A sad outcome of our dysfunctional congress. The best possible outcome for AI would be for it to be killed in the womb through regulation.

And I am dead serious about all of this.
 