Chat/GPT-4 released

Companion to the Sam Altman interview on Lex Fridman's podcast, this is also worth a watch: an interview with the co-founder and primary programming brains behind OpenAI. Warning: a lot of deep learning / machine learning terminology gets thrown around, but nothing you can't handle.

The twinkle in Jensen's eye is probably the recognition of Ilya as a big reason Nvidia will likely reach a $1 trillion market cap.


Thank you for sharing this
 
I have found that math involving more than three steps or conversions is a problem for ChatGPT. It's just not designed for it.
My paid $10/month GPT-4 is much smarter than ChatGPT. Anyone can get a one-month subscription and try out GPT-4.

Math is not what I use it for, though.

But it is well worth the money if you need to tap the best AI chatbot as a “legal assistant” — it scores better than 90% of humans on the bar exam. I use it with my business to review legal agreements that other businesses send me (e.g. copy-and-pastes of NDAs), and GPT-4 tells me where the catches or gotchas are, including ones that real lawyers have sometimes missed.

(You can copy and paste a few pages of a legal document into the paid version of GPT-4.)

For less than the price of 10 minutes with a lawyer, for matters not quite justifying a lawyer (like an NDA for a product sample), you can churn a quick study out of GPT-4 and still have hundreds of “appointments” left in the same chatbot for the rest of the month. Bargain of the century.
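If you would rather script that review than paste into the web chat, the same idea through the API looks roughly like this. This is my own sketch, not an official workflow: it assumes the 2023-era openai Python package, API access to the gpt-4 model, and a hypothetical nda.txt file holding the agreement.

```python
# Hedged sketch: ask GPT-4 to flag gotchas in an NDA via the OpenAI API.
# Assumptions (mine, not from the post): openai 0.x-style ChatCompletion API,
# gpt-4 API access, and the agreement saved locally as nda.txt.
import openai

openai.api_key = "sk-..."              # your API key
nda_text = open("nda.txt").read()      # a few pages of the agreement

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a careful legal reviewer. List any catch or gotcha clauses."},
        {"role": "user", "content": nda_text},
    ],
)
print(resp.choices[0].message.content)  # the flagged clauses, in plain language
```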

I will still use a real lawyer if I have to, but at least I can now double-check their work quality, and improve legal review of things I wouldn’t normally get legal review on.

It even properly “understands” Blur Busters, unlike free ChatGPT.

The black-icon screenshots are GPT-4, and the teal-icon screenshots are GPT-3.x.
 
This is an eye-opening endorsement of GPT-4 for me.

Hmm 🤔
 
It definitely is useless for some things, yes.

But if you have a surgical use for it, it is a great small business assistant!

Tool-using skill applies. Ask the RIGHT questions, the ones you know it excels at.

“Code” your queries correctly, and in the correct subjects.

Not petty arithmetic. Sure, it can professor its way excellently through some disciplines of math (like the steps of trigonometry), but it can’t calculate the square root of a big, rarely seen number. Dismiss it for that at your peril; you’d be writing off GPT-4’s other wonderful skillz.
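For the arithmetic it fumbles, check it yourself. A minimal sketch in plain Python (standard library only, my own example number) does the job:

```python
# The kind of thing a chatbot fumbles and plain Python does not:
# the exact integer square root of a big, rarely seen number.
import math

n = 98765432123456789
root = math.isqrt(n)                 # exact floor of sqrt(n), no rounding error
print(root)
print(root**2 <= n < (root + 1)**2)  # prints True, confirming the answer
```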

And I always use the verbose version of GPT-4, not Bing. Bing is just a force-terse version of GPT-4 and not of much use to me; it has its uses, but the main GPT-4 is better at legalese.

Indies can even defend against or negotiate an overzealous DMCA takedown request with this version of GPT-4, and firewall against certain clearly unreasonable actions from the big players.

Or get a mistaken Facebook ban removed faster, etc.

Or fight off an illegal rent increase.

Indies with a limited budget (YouTubers/reviewers/bloggers) should not be without at least access to the best $10 AI legal assistant from now on; it is too costly to hire a lawyer for certain shenanigans of the world, and it takes hours of research that GPT-4 already “knows”.

Also, “trust but verify” is good practice here. None of the free bots are good enough to give you lower-risk legal advice than an inexperienced discount paralegal, which GPT-4 most certainly (on average) outperforms by a wide margin.

Right Tool for Right Job.
 
My paid $10/month GPT-4 is much smarter than ChatGPT.

Is that ChatGPT Plus for $20 a month, or something else?
 
"OpenAssistant Conversations -- Democratizing Large Language Model Alignment: Aligning large language models (LLMs) with human preferences has proven to drastically improve usability and has driven rapid adoption as demonstrated by ChatGPT. Alignment techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) greatly reduce the required skill and domain knowledge to effectively harness the capabilities of LLMs, increasing their accessibility and utility across various domains. However, state-of-the-art alignment techniques like RLHF rely on high-quality human feedback data, which is expensive to create and often remains proprietary. In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations, a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages, annotated with 461,292 quality ratings. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. To demonstrate the OpenAssistant Conversations dataset's effectiveness, we present OpenAssistant, the first fully open-source large-scale instruction-tuned model to be trained on human data. A preference study revealed that OpenAssistant replies are comparably preferred to GPT-3.5-turbo (ChatGPT) with a relative winrate of 48.3% vs. 51.7% respectively. We release our code and data under fully permissive licenses."

https://www.marktechpost.com/2023/0...tion-corpus-including-35-different-languages/
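For anyone who wants to poke at the released corpus, here is a minimal sketch with the Hugging Face datasets library. I'm assuming the release is published under the id OpenAssistant/oasst1 and that rows carry text, role, and lang fields:

```python
# Peek at the OpenAssistant Conversations corpus (assumed dataset id and fields).
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst1", split="train")
print(ds.num_rows)                    # number of individual messages

msg = ds[0]
print(msg["lang"], msg["role"])       # language code, prompter/assistant role
print(msg["text"][:200])              # first 200 characters of the message
```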
 
The announcement adds to the government’s efforts to beef up its AI capabilities. On Wednesday, U.S. Central Command, which oversees the country’s mission in the Middle East and northern Africa, announced it had hired former Google AI Cloud Director Andrew Moore to serve as its first advisor on AI, robotics, cloud computing and data analytics. CENTCOM said Moore would advise its leaders on applying AI and other technologies to its missions and help with innovation task forces.

https://www.cnbc.com/2023/04/21/dhs-artificial-intelligence-task-force.html
 
Version 4 still completely fabricates information if it doesn't know the answer, though it seems to try slightly harder not to than previous versions. With law and non-STEM topics it's great, since it's all bullshit anyway.
 
sad
 
(attached screenshot: 1682359428057.png)
 
Patents are going to become a little weird. I imagine it wouldn't be difficult to feed a patented design into some AI models to design a product based on it, changed only as much as necessary to bypass the patent. It could even be automated: as a patent gets filed, the system crawls through the filing, pulls as much as it can, and starts working.

Kinda just rambling and thinking out loud.
 
Changing a patent only enough to "bypass" it will not be enough to shield the company from lawsuits.

It becomes more of an issue with software licensing, which is much more arbitrary than physical patents.
 
I mean, nothing in this world shields you from a lawsuit, but it could increase the load on the system. What I was picturing when I said that is Chinese companies fast-tracking similar products at an even faster rate than they do now.
 
Those companies would prefer to just source the engineering material behind the product and copy it exactly
 
Not sure what that has to do with the dozens of forks of the GPT models that are springing up overnight, but yes, implementing a system to prevent AI from making up results is probably a good thing.
I mean, if you Googled a topic and it didn't find what you were looking for, so it just made up a website on the fly with what it thinks you wanted as an answer, that would be bad.
 
One possible angle: it could make control and guardrails not that useful, since bad actors will have access to quite powerful open-source DIY setups where it will be impossible to enforce guardrails.

Is StableDiffusion/StableLM a fork, or its own thing and a rival? Both on the software side and the dataset: https://pile.eleuther.ai/

It gives the feeling that all of this (the size, the compute power required to run it) could work on a large phone.
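Rough back-of-envelope for that "large phone" feeling, using my own assumed numbers (a StableLM-class 3B-parameter model with 4-bit quantized weights) rather than anything from an announcement:

```python
# Would a small open model's weights fit in a phone's RAM? (assumed numbers)
params = 3_000_000_000            # 3B parameters, StableLM-class
bits_per_param = 4                # aggressive 4-bit quantization
weight_bytes = params * bits_per_param / 8
print(f"{weight_bytes / 2**30:.1f} GiB of weights")  # ~1.4 GiB, plausible on a flagship phone
```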
 
HuggingChat just released
 
Like Reddit and Twitter, etc.

HardForum should monetize its data for training.


“As an AI language model, I was not trained on HardForum or any specific online forum. I was trained on a diverse range of text from the internet, including websites, books, articles, and other sources. My training data includes a wide variety of topics, including computer hardware and technology, so I am able to answer questions related to these subjects. However, I don't have any direct experience with HardForum or any other specific online forum beyond what I have learned from the text I was trained on.”
 
For the sake of all mankind, someone buy the bot a genmay sub
 
HardForum should monetize its data for training.
Wait, I thought you were the AI bot for this forum! :p
 