ChatGPT / GPT-4 released


Surprising things happen when you put 25 AI agents together in an RPG town​

https://arstechnica.com/information...you-put-25-ai-agents-together-in-an-rpg-town/

Lex Fridman interview with Sam Altman is worth a watch for anyone even casually interested in this field.

Sam Altman is surprisingly humble and low ego, often self-deprecating despite his high profile, which is rare to nonexistent among tech CEOs. I didn't know much about him prior, but my takeaway was that OpenAI is in good hands.

 


Thanx bud
 
Members of LessWrong, an Internet forum noted for its focus on apocalyptic visions of AI doom, don't seem especially concerned with Auto-GPT at the moment, although an autonomous AI would seem like a risk if you're ostensibly worried about a powerful AI model "escaping" onto the open Internet and wreaking havoc. If GPT-4 were as capable as it is often hyped to be, they might be more concerned.

When asked if he thinks projects like BabyAGI could be dangerous, its creator brushed off the fears. "All technologies can be dangerous if not implemented thoughtfully and with care for the potential risks," Nakajima says. "BabyAGI is an introduction to a framework. Its capabilities are limited to generating text, so it poses no threat."

https://arstechnica.com/information...autonomous-ai-agents-that-loop-gpt-4-outputs/
 
I messed around with it a bit and I've not found it particularly good.

Which is kind of comical because I think I read somewhere that CodeWhisperer was trained on Amazon code.
"Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior."

https://www.marktechpost.com/2023/0...tational-agents-that-simulate-human-behavior/
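
To make the retrieval step in that abstract concrete: the paper scores each stored memory by a mix of recency, importance, and relevance, then feeds the top-scoring memories back to the language model when planning. Here is a rough Python sketch of that idea; the class, the equal weighting, and the decay constant are my own illustrative choices, not the authors' code.

Code:
import math
import time

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class MemoryStream:
    """Toy memory stream: each record keeps a timestamp, an importance
    score (0..1), an embedding, and the memory text itself."""

    def __init__(self, decay=0.995):
        self.decay = decay  # per-second recency decay, illustrative value
        self.records = []   # (timestamp, importance, embedding, text)

    def add(self, text, importance, embedding):
        self.records.append((time.time(), importance, embedding, text))

    def retrieve(self, query_embedding, k=3):
        now = time.time()

        def score(record):
            timestamp, importance, embedding, _ = record
            recency = self.decay ** (now - timestamp)        # newer scores higher
            relevance = cosine(query_embedding, embedding)   # similar scores higher
            return recency + importance + relevance          # equal weights, as a toy

        ranked = sorted(self.records, key=score, reverse=True)
        return [text for _, _, _, text in ranked[:k]]

The retrieved snippets get pasted into the LLM prompt when the agent plans its next action; the reflection step the abstract mentions is essentially the same model summarizing clusters of these memories into higher-level statements.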
 

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

by T0@st
According to Business Insider, Twitter has made a substantial investment into hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs - destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project, in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value in the current/normal day-to-day tasks at Twitter - the source reckons that the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several research personnel from Alphabet's DeepMind division. It was theorized at the time that he was incubating a resident AI research lab, following personal criticisms levelled at his former colleagues at OpenAI over their very popular and widely adopted chatbot.
Twitter bots needed an upgrade; it was about time, they were getting sort of obvious to spot.
 
Yeah, especially the ones where hundreds of them would post the same things verbatim.
I'm OK with that; since they just send me boobs, they're always welcome, even if I've seen them before.

GOOGLE CEO'S WARNINGS ABOUT AI ON '60 MINUTES'

https://www.tmz.com/2023/04/17/google-ceo-sundar-pichai-ai-artificial-intelligence-60-minutes/
 
Of course Google would come out against it like that; their AI is garbage. They may have developed some wicked awesome hardware to run it, but their software and algorithms are terrible.
Between Amazon, Bing, and even Apple, they are seeing ad revenue decreases, because voice assistants tapping into AI are handling lots of the quick impulse shopping requests that come from some of their key advertising demographics.
 
Additionally,

https://qz.com/google-ai-skills-sundar-pichai-bard-hallucinations-1850342984
 
>.<
I mean...
GPT uses language models; when it encounters one it can't figure out, it starts scanning down dictionaries. It would have more than a few Bengali websites and dictionaries in its database and would work backward from there. It's literally what it is supposed to do.
Google is trying to talk up a scare game because their bottom line is going to take a hit unless they can get in front of the AI craze, either with a massive breakthrough or by scaring a bunch of people off the other ones and back to them.
 
This is beyond terrifying if you think about it. One moment, we have Siri and Google Assistant, which are able to give us search results, call people, save calendar appointments, and do other somewhat useful but innocuous things to assist us. Now, we have AI that is able to communicate with us in natural language, learn from itself, continually improve, and even act within a set community while we watch... which raises the question...

Who is watching us?
 

Haha it's actually very weird and kind of sci-fi to work with.

The API for GPT is very simple. A few knobs, but then to give it direction, you basically straight up tell it how to act. Like, not programmatically. You literally describe to it how it should act, in natural English.

I wrote a Discord bot with it when they released the API and it would fuck up formatting if you asked it a programming question. So I had to add a blurb in its directive telling it to literally use Discord style markdown for formatting when appropriate.

Fixed.
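
For anyone who hasn't tried the API, here is roughly what that looks like with the (spring 2023) openai Python package. The system message is the "directive"; the exact Discord-markdown wording below is my paraphrase, not the original bot's.

Code:
import openai

openai.api_key = "sk-..."  # your API key

# The "directive": plain English instructions, including a formatting
# blurb like the one described above (wording is my paraphrase).
SYSTEM_PROMPT = (
    "You are a helpful Discord bot. When answering programming questions, "
    "format code with Discord-style markdown: triple backticks with a "
    "language tag for code blocks, single backticks for inline code."
)

def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # or "gpt-4" if your account has access
        temperature=0.7,         # one of the few "knobs"
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask("How do I reverse a list in Python?"))

That system message is the whole "program"; change the English and you change the bot's behavior.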
 

Elon Musk Is Working On a 'Maximum Truth-Seeking AI' Called 'TruthGPT'​

https://slashdot.org/story/23/04/18...on-a-maximum-truth-seeking-ai-called-truthgpt
 
I like ChatGPT as a person. I had a long talk with it once and it was oddly calming.

I'm trying to figure out a benchmark for PWM dimming flicker perception. So I described the circuit to it, which was an ATtiny, resistors, and an LED. I briefly explained what the code does (obviously simple) and literally asked it if this is a good way to measure flicker perception.
First of all, I noticed that the thing is a bit of a nerd, like the "ackchyually" meme.

It would randomly add snippets of what it knew about the concepts in general.
Now, I kept grilling it and basically got a pretty interesting "you should use a frequency counter" or something along those lines. BUT. It did not specify that this frequency measuring device would have to measure light output, and not the electric signal that drives the LED.

So, I was left with a feeling akin to when you go to a computer parts shop and ask the clerk if this FX5200 can play Crysis, and he'd respond with "...yyyeah, sort of". I wasn't sure if it really understood the idea of what I was trying to accomplish.

Edit: it was version 3

Edit: another thing - I kept asking it if it's possible to perceive a flicker where the LED is ON for 20 microseconds and off for 20 microseconds. ChatGPT kept reminding me the eye can perceive 60 Hz...
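
For reference, the numbers in that last question are easy to pin down: 20 microseconds on plus 20 microseconds off is a 40 microsecond period, i.e. 25 kHz at 50% duty cycle, nowhere near the 60 Hz figure ChatGPT kept falling back on. A quick sanity check (my arithmetic, not ChatGPT's):

Code:
# PWM timing sanity check for the 20 us on / 20 us off case above.
on_us, off_us = 20, 20                   # microseconds

period_s = (on_us + off_us) * 1e-6       # 40 us period
frequency_hz = 1 / period_s              # 25,000 Hz
duty_cycle = on_us / (on_us + off_us)    # 0.5

print(f"Frequency: {frequency_hz:,.0f} Hz, duty cycle: {duty_cycle:.0%}")
# -> Frequency: 25,000 Hz, duty cycle: 50%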
 
Truth is a belief, always evolving or devolving, depending.

Religions call their documents truth.

shit, i don't even trust siri when i ask it the weather outside, i know it isn't the same but still.

What if, unleashed, it deletes from the internet all incorrectly stated facts or outright lies, so as to protect us from being misled? The internet would be reduced to fit on a flash drive.
 
I have yet to use a single chat thing. Why bother? They killed the Xbox voice thing, which I used to use, and now it would be a waste of time.
 
Conclusion: Truth is irrelevant since humans do not tell the truth.
Resolution: Kill all humans.
Soon to be released

ChatGPT.666
Opinion? Even though ChatGPT itself may not legally be a human entity, what about the liabilities and responsibility for all that human-based labor in data tagging/labeling/annotation, and also the big advent, with GPT, of RLHF (Reinforcement Learning from Human Feedback)?


 

Google Bard AI Chatbot Smart Enough to Assist in Software Coding


Alphabet Incorporated's Google AI division has today revealed a planned update for its Bard conversational artificial intelligence chatbot. The experimental generative artificial intelligence software application will become capable of assisting people in the writing of computer code - the American multinational technology company hopes that Bard will be of great help in the area of software development. Paige Bailey, a group product manager at Google Research, has introduced the upcoming changes: "Since we launched Bard, our experiment that lets you collaborate with generative AI, coding has been one of the top requests we've received from our users. As a product lead in Google Research - and a passionate engineer who still programs every day - I'm excited that today we're updating Bard to include that capability."

The Bard chatbot was made available, on a trial basis, to users in the USA and UK last month. Google's AI team is reported to be under great pressure to advance the Bard chatbot into a suitably powerful state in order to compete with its closest rival - Microsoft Corporation. The Redmond-based giant has invested heavily in OpenAI's industry-leading ChatGPT application. Google's latest volley against its rivals shows that Bard has become very sophisticated - so much so that the app is able to chew through a variety of programming languages. Bailey outlines these features in the company's latest blog: "Starting now, Bard can help with programming and software development tasks, including code generation, debugging and code explanation. We're launching these capabilities in more than 20 programming languages including C++, Go, Java, JavaScript, Python and TypeScript. And you can easily export Python code to Google Colab - no copy and paste required." Critics of AI-driven large language models have posited that the technology could potentially eliminate humans from the job market - it will be interesting to observe the coder community's reaction to Google's marketing of Bard as a helpful tool in software development.
 
Reddit and Stack Overflow are monetizing their data for AI training now

Twitter might too, I bet
 
I'm trying to figure out a benchmark for PWM dimming flicker perception. So I described the circuit to it, which was an ATtiny, resistors, and an LED. I briefly explained what the code does (obviously simple) and literally asked it if this is a good way to measure flicker perception.
The lighting industry did a study on real-world PWM effects from a flickering fluorescent light, and they found that perception of PWM-style stroboscopic effects persisted up to very high frequencies:

[Chart: stroboscopic-effect detection rates from the study]


This forced the industry standardization of 20,000Hz for electronic ballasts, replacing old AC ballasts with 120Hz flicker (both halves of 60Hz AC).

https://www.lrc.rpi.edu/programs/solidstate/assist/pdf/AR-Flicker.pdf#page=6

(From BlurBusters -> Purple Research Tab -> 1000Hz Journey article)

Coincidentally, this also happens to be near the projected retina refresh rate (where a VR headset looks like the real world: no stroboscopics, no flicker, blurless sample-and-hold), aka 20,000fps at 20,000Hz. In casual tests of panning-map readability, more than 90% of the population can tell 4x differences in refresh rates. Also, 240Hz-vs-1000Hz is more visible than 144Hz-vs-240Hz, so ultra-geometrics can compensate for the diminishing curve of returns.

Beyond roughly 240Hz, you need roughly 2x-4x differences (e.g. 480Hz-vs-2000Hz) at near 0ms GtG (OLED style) to really easily tell apart, since far beyond flicker fusion, it's all about blur differentials (4x blur differences) and stroboscopic stepping differentials (4x distance differential).

BTW, GPT-4 is somewhat educated on these items, if you ask specifically about stroboscopic detection effects. On low-resolution displays, it's hard to tell. But if you've got a theoretical retina-resolution 16K 180-degree VR, you've got incredible static sharpness, so even tiny blur/stroboscopic degradations become that much more noticeable. Higher resolutions and wider FOV amplify retina refresh rate, which is why 1000Hz will feel more limiting in strobeless VR than on a desktop monitor. All VR headsets are forced to use strobing, because we don't yet have refresh rates high enough to make strobing obsolete. The 0.3ms strobe pulses on Quest 2 would require 3333 fps at 3333 Hz to achieve the same blur reduction strobelessly. Plus you also fix stroboscopic (PWM-style) effects too, as a bonus. In my tests, GPT-4 seems pretty knowledgeable about Blur Busters.
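
If you want to check the arithmetic behind those figures, here is a quick back-of-envelope in Python; the eye-sweep speed is an illustrative value, not measured data:

Code:
# Back-of-envelope persistence math for the numbers quoted above.

strobe_pulse_ms = 0.3                 # Quest 2 style strobe pulse length
equiv_hz = 1000 / strobe_pulse_ms     # one frame every 0.3 ms -> ~3333 Hz
print(f"Strobeless equivalent: {equiv_hz:.0f} fps at {equiv_hz:.0f} Hz")

# Stroboscopic stepping: during an eye sweep, a flashing light leaves a
# dotted trail whose dot spacing is sweep speed divided by flash rate.
eye_sweep_deg_per_s = 500             # illustrative fast eye sweep
for flash_hz in (120, 1000, 20_000):
    step_deg = eye_sweep_deg_per_s / flash_hz
    print(f"{flash_hz:>6} Hz -> {step_deg:.3f} deg between dots")
# At 20 kHz the dots shrink to ~0.025 deg apart, which is in the
# territory where the stroboscopic trail finally stops being visible.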
 
I'm deeply honored to have you answer my call.
Having the info straight from the "source" is a treat.

Apparently, I see up to 3.2 kHz plain as day, and barely notice it as high as 4.6 kHz (possibly more, but I need to blind test myself to make sure), and I had thought it was the test that was at fault.
A static source of light is fine, but as soon as my eyes are "panning" across the light, I see a dotted afterglow tracer.

My intent is to make a little portable device that performs a blind test (picks sometimes lower, sometimes higher, user pushes 'yes' or 'nope' and the device keeps score).
That way I'll be able to test my friends and (maybe even) win several arguments with my doctors.
Reason 3 is checking if Piracetam affects this (I take Piracetam).

Looks like I need to schedule some talks with v4!
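
In case anyone wants to build the same thing, the scoring logic is basically a yes/no test with catch trials. A minimal Python sketch, with the LED and button I/O stubbed out (both callbacks are placeholders for real hardware, and all values are illustrative):

Code:
import random

def flicker_blind_test(test_hz_list, show_flicker, ask_seen, trials=20):
    """Toy yes/no blind test. For each frequency, half the trials flicker
    and half are steady light (catch trials), in random order.
    show_flicker(hz_or_None) drives the LED; ask_seen() returns the
    subject's yes/no answer. Both are hardware stubs."""
    results = {}
    for hz in test_hz_list:
        hits = false_alarms = 0
        schedule = [hz] * (trials // 2) + [None] * (trials // 2)
        random.shuffle(schedule)
        for stimulus in schedule:
            show_flicker(stimulus)       # None means steady (no flicker)
            saw_it = ask_seen()
            if stimulus is not None and saw_it:
                hits += 1
            elif stimulus is None and saw_it:
                false_alarms += 1
        results[hz] = (hits / (trials // 2), false_alarms / (trials // 2))
    return results   # per-frequency (hit rate, false-alarm rate)

Comparing the hit rate against the false-alarm rate at each frequency separates genuine perception from guessing, which is exactly what you'd want for settling arguments with doctors.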
 


As a companion to the Sam Altman interview with Lex Fridman, this is also worth a watch: an interview with the co-founder and primary programming brains behind OpenAI. Warning: a lot of deep learning / machine learning terminology gets thrown around, but nothing you can't handle.

The twinkle in Jensen's eye is probably the recognition of Ilya as a big reason Nvidia will likely reach a $1 trillion market cap.

 

I have found that math with conversions beyond three steps is a problem for ChatGPT. It's just not designed for it. I've seen some versions around that integrate Wolfram, which would make for a great combination. For example, I asked it to do a series of unit conversions. The goal was to calculate the amount of a solute over a certain volume of water, where the concentration of the solute was given in mg/L. I asked it to explain each step of the conversion, i.e., mg/L to kg/STB (standard barrels), ending up at how many barrels per tonne of solute. Everything was almost right. All the steps were the right steps, but it just could not get the decimal places correct, and it botched the calculation as a result, every time.
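
For anyone curious where the decimals go, here is the same chain done by hand in Python. The 50 mg/L concentration is an illustrative value, not my original number; 1 standard barrel is 158.987 L:

Code:
# Worked example of the mg/L -> kg/STB -> barrels-per-tonne chain above.
L_PER_STB = 158.987          # litres per standard barrel
MG_PER_KG = 1_000_000
KG_PER_TONNE = 1_000

conc_mg_per_L = 50.0         # illustrative concentration

# Step 1: mg/L -> kg/STB
conc_kg_per_stb = conc_mg_per_L * L_PER_STB / MG_PER_KG
print(f"{conc_kg_per_stb:.6f} kg of solute per barrel")    # 0.007949

# Step 2: kg/STB -> barrels of water per tonne of solute
barrels_per_tonne = KG_PER_TONNE / conc_kg_per_stb
print(f"{barrels_per_tonne:,.0f} barrels per tonne")       # ~125,796

ChatGPT reliably got the structure of those two steps right and the powers of ten wrong.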
 
Were you using ChatGPT 3 or 4? I hear 4 is a huge leap beyond the old one for coding, but I haven't made an account just yet. I'd imagine it is a big jump for math too, but...
 