Apple 'Scary Fast' Mac event October 30th

It is, but they’ve expanded Apple Care to cover just about anything from natural failure to “Oh Shit! I just dumped my coffee all over it!”.
There's a story today on WCCFTech--so, you know, the usual caveats--that the USB-C ports have moisture sensors in them, with a reporting daemon, and, it being that site, a suggestion that tripping the sensor would cause Apple to deny warranty claims.

https://wccftech.com/apple-macs-usb-c-ports-can-now-detect-liquid/
 
While we're here, are there any Windows laptops with 400 GB/s of access to 128 GB of system memory?
 
If you have the type of workload that could actually use the insane amount of GPU memory these things offer, they pretty much stand on their own, and it's impressive given the laptop form factor.
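For rough context, that ~400 GB figure is really GB/s of memory bandwidth, and it falls straight out of bus width times transfer rate. A back-of-the-envelope sketch in Python, assuming a 512-bit LPDDR5 interface at 6400 MT/s for the top M3 Max configuration and a common 128-bit DDR5-5600 laptop for comparison (treat the exact figures as assumptions):

Code:
# Rough memory bandwidth: bus width (bits) x transfer rate (transfers/s).
def bandwidth_gb_s(bus_width_bits, transfers_per_s):
    return (bus_width_bits / 8) * transfers_per_s / 1e9

print(bandwidth_gb_s(512, 6400e6))  # ~409.6 GB/s (assumed top M3 Max config)
print(bandwidth_gb_s(128, 5600e6))  # ~89.6 GB/s (typical dual-channel DDR5 laptop)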
 
I dunno about that claim.

But it is the case that the Apple repair sites use moisture meters to deny warranty claims.
No, Apple has accepted warranty claims on machines with AppleCare attached where I straight-up told the tech agent that the operator spilled a cup of coffee on it.
If it has moisture in there and you plug something in, it will give you a warning message and not put power through it.
The iPhones have them too.
 

But I had an AppleCare appointment just last week where they advised me again that I have to pay out of pocket if they find moisture.
 
Geekbench is also showing the M3 GPU as only 7% slower than a 4080, which I find highly suspect. I would take any scores posted there with a whole ass box of salt.
Geekbench isn't good at comparing Apple products to other products. It's fine for comparing M3s to M2s and M1s.

Comparing benchmarks is meaningless on apple computers.
So you're saying that Apple is immune to benchmarks?
Benchmarks are meaningless here because Apple's software and hardware cohesiveness means real-world performance doesn't necessarily correlate with those benchmarks.
No yea, you think benchmarks don't matter. Not even sure if you're serious or joking now.
What's the point of comparing benchmarks scores when they can never be apples to apples comparisons (pun intended)? That's the point I'm making.
You can always compare Apples to Apples. You're excusing Apple for their performance.
New Apple laptops can perform better in certain use cases, but can it run Crysis? :ROFLMAO:. At the same time, you aren't going to run some industry-standard software like Final Cut Pro on Windows...because you can't.
I wouldn't call Final Cut Pro an industry standard. DaVinci Resolve is more industry standard.
 
The only impressive chip from the new M3s is the Max. It's quite a huge leap in performance in a laptop. It goes head to head with the M2 Ultra, which is basically two M2 Max chips. So ... that's kind of insane. Everything else is just a bit meh. The base 14" ... which can still only connect to one display ... is stupid.

I wouldn't call Final Cut Pro an industry standard. DaVinci Resolve is more industry standard.
Everyone is definitely jumping to Resolve these days.
 
The only thing scary about Apple's M3s is how they took a page from Nvidia and charge $1,800 for 8GB of RAM. Also, it seems the M3s are more downgrade than upgrade. Fewer cores and less memory bandwidth than the M2s?
Apple has always bent people over for RAM upgrades...heck, Dell does it in their server line, charging $2k+ for a 32GB stick you can get anywhere else for a couple hundred...same model.
 
Yeah, when we rebuilt the cluster at my current job, we ordered the servers with the bare minimum RAM Dell would sell and then bought it from Crucial or Amazon, can't remember. Saved several thousand dollars on the final project cost. And that was buying a spare kit as well.
 
I am very not impressed.
Apple is using a much better node for its chips than the competition and still doesn't come close, for the price, to what the x86 hardware world does. It also leans on parts of the GPU and special memory just to beat the old 7950X, and not even the 3D chip.
I'm waiting to laugh at their expensive Mac Pro/Studio workstations based on the future M3 Ultra: the same price as the ultimate Threadripper with a bunch of 4090s, while performing like a Ryzen with a mid-range GPU. They'll again use some sort of biased benchmark to say it's the best... and Apple fans will applaud.

Intel used something close to TSMC 7nm for its latest CPU chips, and AMD uses a mixture of TSMC 5nm and 6nm for its CPUs. Nvidia uses TSMC 4nm for its latest GPUs, and AMD uses a mixture of 5nm and 6nm for its latest GPUs. Apple now uses TSMC 4nm and 3nm, and they pay to be the only one using it before TSMC gets one step better. So no, this is not impressive at all.

Since Apple got out of using PC chips, they're going down the same old Motorola chip path: first the 68000, then the PowerPC. The PC platform is not going to disappear, and it will remain the standard for home, business, and workstation computers, because PCs are cheap for the benefits, versatile, and can be built from scratch by anybody. There is no way Apple can counter PC performance for the price. The same Apple hardware with PC components would be less expensive, bring more money to Apple, have performance on par with PCs, and even work as a dual boot with standard Linux or Windows. Apple just lost that ability and value.
 
Right, but that's a really convoluted way to explain the obvious.

I'm a big proponent of buying the machine that makes the most sense for the proverbial "your" use case. However, for the vast majority of users, I don't think there are specific programs that are necessary to do a given job. And I would say the amount of software so specialized that something similar can't be done on the other side is vanishingly small. It's honestly getting to the point where it's just preference for a specific workflow far more than it is about whether a particular machine is capable. Other than actual processing power (circling back around to that again).

Yes.

Again, yes. Though I would say that full-time Final Cut editors are a distinct minority. It's still all Premiere, Resolve, and Avid. For some reality TV stuff I have heard of story writers using Vegas.
I feel like we're saying the same stuff but debating over semantics.
 
Intel used something close to TSMC 7nm for its latest CPU chips, and AMD uses a mixture of TSMC 5nm and 6nm for its CPUs. Nvidia uses TSMC 4nm for its latest GPUs, and AMD uses a mixture of 5nm and 6nm for its latest GPUs. Apple now uses TSMC 4nm and 3nm, and they pay to be the only one using it before TSMC gets one step better. So no, this is not impressive at all.
To be fair, AMD is using 4nm for their mobile 7000 chips.
Since Apple got out of using PC chips, they're going down the same old Motorola chip path: first the 68000, then the PowerPC. The PC platform is not going to disappear, and it will remain the standard for home, business, and workstation computers, because PCs are cheap for the benefits, versatile, and can be built from scratch by anybody. There is no way Apple can counter PC performance for the price. The same Apple hardware with PC components would be less expensive, bring more money to Apple, have performance on par with PCs, and even work as a dual boot with standard Linux or Windows. Apple just lost that ability and value.
The cost of going 3nm is probably why Apple is cutting cores and bandwidth in their M3 chips. I also wouldn't be shocked if the TSMC 3nm process is full of defects and the M3 Pro chips are cut down because of it. Apple seems to have made real performance improvements, which lets them cut cores and bandwidth while still being technically faster. The telling part is the M3 Max, which Apple clearly put a lot more effort into, since they actually added cores instead of taking them away. Another situation where Apple is telling its consumers to buy the more expensive part or else.

The M3 Max is a monolithic die, meaning Apple is really paying through the nose for this chip to be made. Unlike AMD, which saves money in manufacturing with its chiplet design, or Intel, whose Meteor Lake will have the CPU and GPU manufactured separately in what they call tiling. With AMD's chiplets and Intel's tiles, they also increase performance, because they can bin better. When Apple makes M3 Max chips, some will have defects, which forces them to either reduce clocks or fuse off parts that may be defective, like cores or parts of the memory pipeline. What Apple supporters don't understand is that Apple makes monolithic chips not because it's better, but because they don't have the engineering capability. Just like how unified memory is not better but worse, and you can't upgrade the RAM. Unified memory is always done to cut costs, not to increase performance, which is why the Xboxes and PlayStations have been doing it for over a decade.

Also, the Apple fans need to put down the Geekbench crack pipe, and stop with all the synthetic benchmarks while they're at it. We've seen Geekbench comparisons that make Apple Silicon look so much faster, but when tested on an actual application it's so much slower. There's a good chance that when reviewers run real-world applications against the M2 and M1, they will find the performance Apple promised with the M3 very disappointing.
 
Apple paid just north of $1B USD for the TSMC 3nm tapeout.

https://www.extremetech.com/computing/apple-spent-1-billion-on-the-m3-tape-out-says-analyst

TSMC 3N has a defect rate in the 35-40% range. Apple has a chip-buy contract with TSMC and not a Wafer-Buy like AMD, Nvidia, and just about everybody else uses. So Apple only pays TSMC for the chips that pass their binning specifications but with the defect rate and node delays I’m sure Apple has made a design change or 2 with the 3N products.

Quick FYI on chiplets: the cost savings isn't that big in the traditional sense, as the interposer itself is not cheap and the extra packaging isn't free. But if you assume X errors per square cm (instead of percentage-based yield rates), having lots of small chips instead of a third as many big ones tends to keep the errors contained in any one part to a minimum, so it results in better binned yields, which means fewer parts being sold for less than intended. So it keeps SKU hell at bay, which lets them get better margins on a wafer since more of it is sold for top dollar, but it's not necessarily cheaper in the sense most attribute to it.
But as AMD is very silicon starved ensuring the most from what silicon they do get is super important. AMD wears a lot of hats and can’t afford to be putting silicon in the burn pile.
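To put the errors-per-area point in concrete terms, here's a minimal sketch of the usual Poisson yield approximation. The defect density and die areas below are made-up illustrative values, not actual TSMC N3 or Apple/AMD numbers:

Code:
import math

# Poisson yield: probability a die of a given area has zero defects.
def poisson_yield(die_area_cm2, defects_per_cm2):
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.2  # hypothetical defects per cm^2
print(f"4 cm^2 monolithic die: {poisson_yield(4.0, D0):.1%} defect-free")   # ~44.9%
print(f"1 cm^2 chiplet:        {poisson_yield(1.0, D0):.1%} defect-free")   # ~81.9%
print(f"good chiplets per 4 cm^2 of silicon: {4 * poisson_yield(1.0, D0):.2f}")  # ~3.3

Fusing off bad cores rescues some of the dirty monolithic dies, which is exactly the lower-bin SKU problem described above.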
 
But I had an AppleCare appointment just last week where they advised me again that I have to pay out of pocket if they find moisture.
Apparently for enterprise they only sell AppleCare+, which covers accidental damage from spills and drops. AppleCare itself doesn't cover that, so now I know.
 
So far, most of the real-world tests I've seen (Blender, video editing, games, a few others) show significant speed improvements, particularly for anything GPU- or video-heavy. Not enough to make someone ditch their M2 Pro/Max system, and it's not always an easy call if you have an M1 Pro/Max, either. But few people replace computers every year, and there are pros for whom Apple's performance advantages are particularly appealing.

AMD- and Intel-based laptops can certainly be faster in some tasks, and despite Apple's rhetoric Macs have a long way to go in gaming. But here's the big challenge: try unplugging that x86 laptop and see how it stacks up then. There are still a few areas where Apple wants you to plug in (games, most notably), but it has a distinct edge for content creation on battery. Important if you're on a photo/video shoot or otherwise can't always afford to be tied to a wall outlet.
 
So far, most of the real-world tests I've seen (Blender, video editing, games, a few others) show significant speed improvements, particularly for anything GPU- or video-heavy.
I would like to see these reviews. Sadly, I'm waiting for Linus Tech Tips to do the M3; even though their reviews aren't great, they're still better than most Apple-only reviewers.
Not enough to make someone ditch their M2 Pro/Max system, and it's not always an easy call if you have an M1 Pro/Max, either. But few people replace computers every year, and there are pros for whom Apple's performance advantages are particularly appealing.
Doesn't sound very significant then. Apple's problem is their products aren't going beyond existing customers. They know an M2 owner won't upgrade, but they want M1 owners to upgrade and they want Windows laptop users to switch.
AMD- and Intel-based laptops can certainly be faster in some tasks, and despite Apple's rhetoric Macs have a long way to go in gaming. But here's the big challenge: try unplugging that x86 laptop and see how it stacks up then. There are still a few areas where Apple wants you to plug in (games, most notably), but it has a distinct edge for content creation on battery. Important if you're on a photo/video shoot or otherwise can't always afford to be tied to a wall outlet.
It's hard to find tests done on AMD's mobile 7000 series unplugged, but from what I've seen it lasts just as long as Apple's. Gotta remember the Steam Deck is x86 and lasts up to 2 hours on battery while playing games, and won't slow down when unplugged, unlike the Switch.
https://www.notebookcheck.net/AMD-R...s-ideally-as-efficient-as-Apple.713395.0.html
 

That's why the high(er) end ones are in black. Apple is betting on black. After all, there's no going back! :-P

EDIT: There's one thing I wish they would add! A cellular radio card! I'm typing this on my Thinkpad X1 via TMO 5G. That's something I cannot do on my Macbook Pro without being hitched to a hotspot or my iPhone.
 
There's one thing I wish they would add! A cellular radio card! I'm typing this on my Thinkpad X1 via TMO 5G. That's something I cannot do on my Macbook Pro without being hitched to a hotspot or my iPhone.

AMEN!

They're beautifully made, the battery life is great, but fiddling around with tethering (quite often) gets on your nerves by the end of the day when you've been to three or four job sites.
 
That's why the high(er) end ones are in black. Apple is betting on black. After all, there's no going back! :-P

EDIT: There's one thing I wish they would add! A cellular radio card! I'm typing this on my Thinkpad X1 via TMO 5G. That's something I cannot do on my Macbook Pro without being hitched to a hotspot or my iPhone.
You can do that, that's probably good enough for Apple.
There were prototypes of the pre-unibody MacBook Pros that had cellular. But it seems to me the issue is they don't think there is enough of a market segment to warrant the R&D, as well as the increased manufacturing costs, to go after such a tiny portion of the market that can simply tether. It would basically require an entirely new production line just for that significantly more complex SKU, and I doubt more than 5% of MBP buyers would select it as an option. So unless it's going to be a $1000 option (or some other ridiculous number that most people wouldn't want to fork out the money for), the ROI just isn't there.

I think this is typified by how few PC laptops exist with this option. There are some, yes, but it's not ubiquitous. You cannot simply select any laptop model and get it with a cellular option.
 
Yeah OEMs require this to be selected when using "build your own". Even though the motherboard often can accept the adapter, the antenna hardware isn't there.

MS has it on one Surface, and it's the X running crappy Windows on ARM.

As far as tethering goes, it does work, but there's nothing like opening up your laptop anywhere and having a connection all the time. I was waiting for someone in a parking lot this evening, ran a speed test, and was getting over 600 down and 125 up (Mbps). I remember 20 years ago using an ExpressCard EV-DO modem on Big Red, amazed at getting near the DSL speeds of most offices. Amazing how far things have come!
 

How about this: instead of waiting for Linus, let's do this ourselves.

We each choose 3 cross-platform CPU benchmarks. I'll buy a loaded M3 Max (8 TB SSD, 128 GB RAM, etc.) and bench it against your 5700X at stock speeds (but you can leave the watercooling). If the M3 wins, you have to put "I love apple" in your signature for six months.

After the M3 CPU smokes your desktop Ryzen, for fun I guess we can also do some GPU-related productivity benchmarks, though I have no clue how it would fare against your full-size 200W+ GPU.
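Not one of the three benchmarks being proposed, but as an illustration, a crude cross-platform CPU test can be as simple as a script like this (needs only Python and NumPy, runs identically on macOS, Windows, or Linux):

Code:
# Illustrative only: one single-threaded Python loop, one BLAS-backed matmul.
import time
import numpy as np

def single_thread_loop(n=5_000_000):
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return time.perf_counter() - start

def matmul(size=4096):
    a = np.random.rand(size, size).astype(np.float32)
    b = np.random.rand(size, size).astype(np.float32)
    start = time.perf_counter()
    np.dot(a, b)
    return time.perf_counter() - start

print(f"single-thread loop:       {single_thread_loop():.2f} s")
print(f"4096x4096 float32 matmul: {matmul():.2f} s")

Real cross-platform suites (Cinebench, Blender, 7-Zip, etc.) are obviously better; this just shows the plumbing.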
 
From what I can tell the 5700X just edges out the M2 Max 12-core CPU in SMP in general, so the M3 would most likely be faster overall.
From a cost perspective, the 5700X platform with similar specs would most likely be far less than that loaded M3 Max you referenced.

Upping an M2 Max Mac Studio with 128GB unified RAM increases the cost by $1200 USD alone, and the 8TB SSD increases the cost by $2400 USD by itself, so yeah, you are definitely going to pay the price for that "low power". :whistle:
I think a better analysis would be a cost-for-cost comparison, assuming it can be done since Apple does make one pay for those specs.

It all depends on the user's requirements and tasks that the system will be used for.
 

When you are using the machine to make money the cost is so irrelevant it's hilarious, especially when there is no PC laptop in the world that can do what the MBP can.
 
I would like to see these reviews. Sadly, I'm waiting for Linus Tech Tips to do the M3; even though their reviews aren't great, they're still better than most Apple-only reviewers.

Doesn't sound very significant then. Apple's problem is their products aren't going beyond existing customers. They know an M2 owner won't upgrade, but they want M1 owners to upgrade and they want Windows laptop users to switch.

It's hard to find tests done on AMD's mobile 7000 series unplugged, but from what I've seen it lasts just as long as Apple's. Gotta remember the Steam Deck is x86 and lasts up to 2 hours on battery while playing games, and won't slow down when unplugged, unlike the Switch.
https://www.notebookcheck.net/AMD-R...s-ideally-as-efficient-as-Apple.713395.0.html
My personal laptop is a Ryzen 7 7735HS with a Radeon 7600S GPU. What do you want me to test?
 
Keep in mind that my 5700X was bought used for $170 off eBay, so it wasn't bought for performance but because there was a fire sale from people who overpaid for computer parts over the past 3 years. If I wanted to build a fast PC, I would buy a Ryzen 9 7950X3D along with an RX 7900 XTX, because I use Linux and AMD is best on Linux. At most it would cost $3,700, and if it happens to be faster than an M3 Max, then so much the better. Meanwhile, your tricked-out M3 Max would cost over $7k.

Keep in mind that system would focus on gaming, because that's not something Apple is good at. Even on Linux, I can run more games with higher performance than an Apple M3 Max user could. If I wanted to build it for productivity, and by that I mean video rendering, then I would have put in an Nvidia GPU like an RTX 4090. After all, I'd have a much higher budget, with another $3k to match the cost of that tricked-out M3 Max; I might as well put in a Threadripper CPU or equivalent. The PC I use is built for budget. The ancient Vega 56 along with the B350 motherboard should have clued you in. The custom water cooling is a mishmash of Chinese parts that I put together years ago; I had this setup back when I used my AMD FX 8350. If by any chance this setup could compete with an M3 Max, that says something about the hardware I chose.

PCPartPicker Part List

CPU: AMD Ryzen 9 7950X3D 4.2 GHz 16-Core Processor ($641.35 @ Amazon)
Motherboard: MSI B650 GAMING PLUS WIFI ATX AM5 Motherboard ($179.99 @ Amazon)
Memory: Kingston FURY 128 GB (4 x 32 GB) DDR5-5200 CL40 Memory ($442.99 @ Amazon)
Storage: Oyen Digital E18-8TBICS5 8 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive ($759.00 @ Amazon)
Video Card: Sapphire PULSE Radeon RX 7900 XTX 24 GB Video Card ($949.99 @ Amazon)
Case: Deepcool CC560 ATX Mid Tower Case ($59.99 @ Newegg)
Power Supply: Corsair RM850x (2021) 850 W 80+ Gold Certified Fully Modular ATX Power Supply ($131.72 @ Amazon)
Monitor: Samsung Odyssey G7 27.0" 2560 x 1440 240 Hz Curved Monitor ($528.00 @ Amazon)
Total: $3693.03
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2023-11-08 15:07 EST-0500
When you are using the machine to make money the cost is so irrelevant it's hilarious, especially when there is no PC laptop in the world that can do what the MBP can.
Any laptop with an AMD Ryzen 9 7940HS can do what Apple can do, and even unplugged it keeps its performance. Something like the Asus Zephyrus G14 with the Ryzen 9 7940HS. Not only that, but most of them are equipped with an RTX GPU, so either way it beats Apple.
 
Can do assuming Microsoft has their shit together which is a bit of a stretch.

Equivalent hardware for sure but significantly different levels of integration.
 

So you're not willing to do it? Afraid of a little laptop?

Regarding your Ryzen laptop comments about doing "anything" Apple can do, let's run some large-dataset machine learning benchmarks. I'll even do it on battery! The datasets won't even fit in the Ryzen laptop's GPU RAM. It will chug worse than your watercooled desktop does at video editing while trying to keep up with a MacBook (on battery power, without the fans even spinning up).
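Whether a workload like that even fits is mostly arithmetic: parameter count times bytes per parameter (plus activations, and more again if training). A rough sketch with assumed model sizes and memory pools, just to illustrate the unified-memory angle being argued here:

Code:
# Rough fp16 inference footprint: params (billions) x 2 bytes per param = GB.
def model_gb(params_billion, bytes_per_param=2):
    return params_billion * bytes_per_param

for params in (3, 7, 13, 34, 70):
    need = model_gb(params)
    print(f"{params}B params @ fp16: ~{need:>3.0f} GB | "
          f"fits 8 GB dGPU: {need <= 8} | fits 96 GB unified: {need <= 96}")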
 
Or using it on an airplane! ;-)
 
His laptop will die after an hour of being unplugged.
 
So you're not willing to do it? Afraid of a little laptop?
You mean test my existing years-old PC against a new M3 Max? Or a newer laptop based on the AMD Ryzen 9 7940HS? With what tests? Let's install our favorite Linux distro on these machines and find out. Oh sorry, Asahi Linux is still far from being a daily driver on Apple hardware. Let's run benchmarks on games like Starfield and Jedi Survivor. That's right, Apple is for productivity, so let's fire up 3ds Max. You know what, let's run a VM instead. I'm sure that 8GB of RAM you're stuck with will magically turn into 16GB for a VM. You see the problem here?

Unless we run a very narrow band of software that isn't being emulated on the Mac, it's a clear win for any x86 machine. If you want to talk about performance while unplugged, then yes, a Ryzen 9 7940HS will do that. Jarrod'sTech, who reviewed the Razer Blade 14, noted that the Asus Zephyrus G14 did nearly as well as or better than the M2 Max when unplugged, and he intentionally used the weaker of the Ryzen 9 7940HS laptops.

View: https://youtu.be/-zYVEvoJIS4?t=223

Regarding your Ryzen laptop comments about doing "anything" Apple can do, let's run some large-dataset machine learning benchmarks. I'll even do it on battery! The datasets won't even fit in the Ryzen laptop's GPU RAM. It will chug worse than your watercooled desktop does at video editing while trying to keep up with a MacBook (on battery power, without the fans even spinning up).
There's a lot of what-ifs going on there. What GPU RAM, because there's the 780M and then there's the plethora of RTX GPUs that tend to be included? You don't have a dataset, you don't have anything. In GPU tests that aren't Geekbench, an RTX 3060 destroys the M2 Max, so I assume any RTX 40-series GPU paired with a Ryzen 9 7940HS will continue to destroy it in performance. You don't even own an M3 Max. Not even sure what you're using now.
 
The only impressive chip from the new M3s is the Max. It's quite a huge leap in performance in a laptop. It goes head to head with the M2 Ultra, which is basically two M2 Max chips. So ... that's kind of insane. Everything else is just a bit meh. The base 14" ... which can still only connect to one display ... is stupid.
That and the M3 Pro is not really any better (and sometimes worse) than the M2 Pro due to the switch from 8P+4E to 6P+6E core configs. I can only speculate that was for battery life, but still kinda lame.
 
I do wish the M3 Pro was better... with that said, battery life is the name of the game for me, so I won't mind so much if I get one and it lasts an hour or two longer.
 
Plenty of people use Asahi as a daily driver.

I love my Air cuz not only does it have super fast RAM, but when I use it heavily for work each day, all day, I still only have to charge it every other night. I never take a charger with me anywhere.
 
That and the M3 Pro is not really any better (and sometimes worse) than the M2 Pro due to the switch from 8P+4E to 6P+6E core configs. I can only speculate that was for battery life, but still kinda lame.
That's half of it, but I bet Apple looked at their collected logs and metrics and saw that the average user of the M1 platform wasn't running it hard enough to notice the difference, and the few who would could be upsold to an M2 Max or were going to hold out for the M3 anyway.
Or they were still using Intel-based Macs and it was going to be a massive upgrade regardless.
 
I bet it's more for power efficiency. Either that or this is the result of 3nm manufacturing defects and this is the compromise Apple went with.
 

Nobody that needs to be mobile cares about any of this if they need to keep their laptop plugged in to do work for hours on end. You’re also talking out of your arse, which is not surprising. Until Windows laptops can stop sucking at battery life when unplugged and on the go, then it’s all irrelevant. If anyone can get there, it’ll be AMD, but they’re not there yet. I’ve also owned the latest Zephyrus G14 hoping it would be comparable in battery life. And it sucks balls just as all Windows laptops still do for battery life. I could get about 7 or so hours out of it in lower power mode with the GPU disabled, but in “power mode”, it would last about 2-3 hours. The discussion of power to battery life efficiency is still a non-conversation for Windows and that will still be true for years. The latest “40” series GPUs in laptops also require 350+ watt power bricks to get their full potential. Again, being more powerful is irrelevant if it gets 2 hours of battery life.
 
I used to haul around a 7 kg laptop with a 2 kg power supply. It had the first dual-core AMD chip, might have been a desktop part. RAIDed hard drives. 17" screen. Maybe it had SLI GPUs? I had the first-model Dell 24" widescreen HD LCD with a handle I bolted to the top. I traveled 'door to door' selling broadcast automation software to TV stations, and I could run a whole station on that thing. Which was more a testament to the software.

If there were a Windows laptop I could stick a bunch of GPUs into to get as much RAM as a Mac... I'd probably do it. But there isn't. So now I'm peddling Macs. fml.
 
Nobody that needs to be mobile cares about any of this if they need to keep their laptop plugged in to do work for hours on end.
I think you missed the part where AMD's Dragon Range caught up. The Asus Zephyrus G14 laptop that I mentioned will perform nearly as well as Apple's when unplugged. According to Jarrod'sTech's video, the Asus lasted 560 minutes playing a YouTube video while the Apple M2 Max lasted 748 minutes. That's pretty good considering AMD's idle power draw is higher. Also, these are gaming laptops with 165Hz and 240Hz displays and RGB lighting, so of course they eat more power.
You’re also talking out of your arse, which is not surprising.
This malaka here.
Until Windows laptops can stop sucking at battery life when unplugged and on the go, then it’s all irrelevant.
Maybe you missed the reviews the first few times I've linked it, so I'll link it again.
https://www.notebookcheck.net/AMD-R...s-ideally-as-efficient-as-Apple.713395.0.html
If anyone can get there, it’ll be AMD, but they’re not there yet.
AMD is not the one Apple is worried about. Why do you think Apple was so quick to release their M3 news? I still can't even find a review of the new M3 chips. Intel is supposed to release their Meteor Lake chips soon.
I’ve also owned the latest Zephyrus G14 hoping it would be comparable in battery life. And it sucks balls just as all Windows laptops still do for battery life. I could get about 7 or so hours out of it in lower power mode with the GPU disabled, but in “power mode”, it would last about 2-3 hours.
That's really good. What do you think those M1 laptops get when playing games like World of Warcraft? It ain't 2-3 hours without turning down the brightness and lowering graphics settings.

View: https://youtu.be/jYSMfRKsmOU?si=UREM0sWuPImS3ahk
The discussion of power to battery life efficiency is still a non-conversation for Windows and that will still be true for years. The latest “40” series GPUs in laptops also require 350+ watt power bricks to get their full potential. Again, being more powerful is irrelevant if it gets 2 hours of battery life.
The problem that you and many of the Apple faithful seem to gloss over is that the magical efficiency only happens under very specific conditions. Fire up a multi-threaded application and the power consumption is no different between AMD and Apple. Fire up a game and suddenly 2 hours of battery would be an achievement. You keep comparing Apple GPUs to Nvidia's, as if they're comparable. Every time a GPU application is tested, even the M2 Max can only wish to perform as well as an RTX 3060. Apple is only now getting features like mesh shaders and ray tracing, 4 years after Nvidia, 3 years after AMD, and a year after Intel's A770.
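For what it's worth, the battery-life side of this is also just arithmetic: capacity in watt-hours divided by average draw in watts. The numbers below are round hypothetical figures, not measurements of any machine in this thread:

Code:
# Runtime (hours) = battery capacity (Wh) / average system draw (W).
def runtime_hours(battery_wh, avg_draw_w):
    return battery_wh / avg_draw_w

battery_wh = 100  # roughly the airline carry-on limit; a big laptop battery
for label, draw_w in [("light web browsing", 8),
                      ("sustained multi-core load", 40),
                      ("gaming on a dGPU", 90)]:
    print(f"{label} @ {draw_w} W: {runtime_hours(battery_wh, draw_w):.1f} h")

Which is the point both sides keep circling: whoever draws fewer watts at the same workload wins on battery, no matter the ISA.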
 