Intel Core Ultra 7 155H Meteor Lake Linux benchmarks

Marees

Phoronix has done initial benchmarking of the CPU-only performance of Intel's new mobile APU and compared it against AMD's existing Zen 4 APU

Big win for Intel: day-0 & day-1 availability, unlike AMD, which takes weeks or months to deliver
Intel also won PHP scripting

Win for AMD: Almost everything else

Those wanting to see all 370 benchmarks in full along with all of the per-test CPU power consumption numbers for this head-to-head comparison can do so via this OpenBenchmarking.org result page.

https://www.phoronix.com/review/intel-core-ultra-7-155h-linux/12
 
“When taking the geometric mean of all 370 benchmark results, the Ryzen 7 7840U enjoyed a 28% lead over the Intel Core Ultra 7 155H in these Linux CPU performance benchmarks. This was all the while the Ryzen 7 7840U was delivering similar or lower power consumption than the Core Ultra 7 155H with these tests on Ubuntu 23.10 with the Linux 6.7 kernel at each system's defaults. The Core Ultra 7 155H also had a tendency to have significantly higher power spikes than the Ryzen 7 7840U.”

Meteor Lake having a 28% performance deficit at equal power is not the Core 2 Duo Centrino moment Intel initially positioned it to be.
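The 28% figure is the ratio of geometric means across all 370 normalized results, so no single benchmark dominates the average. A minimal sketch of that calculation, using made-up illustrative numbers rather than Phoronix's actual data:

```python
import math

# Hypothetical per-test scores, normalized so higher is better
# (time-based results would be inverted before this step).
ryzen_7840u = [102.0, 98.5, 130.2, 88.7]   # illustrative values only
core_155h   = [80.1, 75.3, 101.9, 70.4]    # illustrative values only

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# The headline number is the ratio of the two geometric means.
lead = geomean(ryzen_7840u) / geomean(core_155h) - 1.0
print(f"Geomean lead: {lead:.1%}")
```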
 
I take this more as a comparison between an Acer Swift Go 14 (Intel) and the Framework 13 (AMD).

I'll hold out for a few other tests, but the Acers are notoriously bad, and the Frameworks are just awesome.

Still not a great first showing though.
 
I take this more as a comparison between an Acer Swift Go 14 (Intel) and the Framework 13 (AMD).

I'll hold out for a few other tests, but the Acers are notoriously bad, and the Frameworks are just awesome.

Still not a great first showing though.
By contrast Intel Arc Graphics performance with Meteor Lake is rather the complete opposite of yesterday's look at the CPU side performance.

Intel wins both in performance & power in GPU benchmarks on Linux

Now to check Windows & Direct3D performance ...

https://www.phoronix.com/review/meteor-lake-arc-graphics/5
 
Now to check Windows & Direct3D performance ...
It's about 8% faster in gaming on average than the AMD offerings, but 70% faster than Intel's best last-gen offerings. Lots of driver issues with things trying to use the NPU though, so that's an issue; most software is still using the GPU and not touching the actual accelerators.
 
Not a good showing for Intel.
I don't agree. People were hoping that Meteor Lake would be the new fastest CPU from Intel, when in reality this is supposed to be a mobile-only chip that's meant to be power efficient. In that regard it's doing very well. Unplugging it doesn't change the performance unless you use both the GPU and CPU, like in a game, but AMD and Apple both lose performance in that area when unplugged as well. The GPU performance is rather good, even though it's reported to go up and down a lot, so it probably needs driver fixes.

Geekbench should never be used, and the new Cinebench R24 seems to favor Apple like Geekbench does. Nobody should run Geekbench or Cinebench R24 for testing. Phoronix actually did real-world tests, and they seem to show Meteor Lake in a better light. Battery life and power consumption would have been better tests. I think this is a good first step for Intel in making a power-efficient mobile APU. I wouldn't buy one unless it was cheaper than AMD's offering.

View: https://youtu.be/05joCv0j_Cc?si=RUlAQjzrSdKVUpjb
 
I don't agree. People were hoping that Meteor Lake would be the new fastest CPU from Intel, when in reality this is supposed to be a mobile-only chip that's meant to be power efficient. In that regard it's doing very well. Unplugging it doesn't change the performance unless you use both the GPU and CPU, like in a game, but AMD and Apple both lose performance in that area when unplugged as well. The GPU performance is rather good, even though it's reported to go up and down a lot, so it probably needs driver fixes.

Geekbench should never be used, and the new Cinebench R24 seems to favor Apple like Geekbench does. Nobody should run Geekbench or Cinebench R24 for testing. Phoronix actually did real-world tests, and they seem to show Meteor Lake in a better light. Battery life and power consumption would have been better tests. I think this is a good first step for Intel in making a power-efficient mobile APU. I wouldn't buy one unless it was cheaper than AMD's offering.

View: https://youtu.be/05joCv0j_Cc?si=RUlAQjzrSdKVUpjb

Agreed; NPU driver issues aside, this is the first Intel mobile platform to stand alongside AMD and Apple in a long while.

The CPU is good enough and the GPU is no slouch; Intel balanced performance pretty well. Much more GPU and the CPU would be a noticeable bottleneck; much more CPU and the GPU would be holding it back.
The NPU holds promise, and I don't know if its current issues are driver, software, or OS related; probably a wonderful combination of the 3.

This chip is proof that competition is working, and it's available in large quantities. Apple is delivering a solid platform but $$$$, AMD is right up there but the offerings are scarce and the wait times are long, and Intel is coming out with something that does the job it set out to do, at a price that isn't terrible, and is easily produced.

Assuming Intel keeps on track with its gaming drivers you would be hard pressed to be upset with its gaming or office workload performance given its price.

Note: The new Cinebench, and Maxon software in general, has been heavily optimized for Apple silicon.
https://www.maxon.net/en/article/ma...imized-for-mac-lineup-with-m3-family-of-chips

So it would be weird if it didn’t show good numbers there.
 
The NPU holds promise, and I don't know if its current issues are driver, software, or OS related; probably a wonderful combination of the 3.
Max Tech did test this and showed that Apple was fantastically faster. What test they used I don't know. How does this benefit users, I don't know.
Note: The new Cinebench, and Maxon software in general, has been heavily optimized for Apple silicon.
https://www.maxon.net/en/article/ma...imized-for-mac-lineup-with-m3-family-of-chips

So it would be weird if it didn’t show good numbers there.
Ok, but how? The link seems to explain that they make use of Apple's ray-tracing hardware, which is what they use for their Maxon software, but does this mean they also use it to boost Cinebench R24 scores? That would be a heavy bias, using GPU ray-tracing to boost CPU benchmark scores. You don't see this manifest in any real-world applications, so I have to wonder. This is why I don't like synthetic tests: it's very easy to tweak the test in someone's favor, whereas in a real-world situation like video rendering or photo editing it's hard to fake it to make it. Like if Handbrake was optimized for M3, that would make sense, because who wouldn't want that? It does make sense for Maxon to make use of ray-tracing hardware because their software does ray-tracing, but it does call into question the validity of their R24 scores.
 
Max Tech did test this and showed that Apple was fantastically faster. What test they used I don't know. How does this benefit users, I don't know.
Apple integrates its accelerators into everything from audio processing to decoding online streaming services to image upscaling so there are lots of little background tasks that we take for granted in Gaming or production that Apple just made a normal thing.
So there are some real day-to-day benefits, but you have to put them on a platform that uses them and Microsoft has so far flopped on that.
 
This is why I don't like synthetic tests: it's very easy to tweak the test in someone's favor, whereas in a real-world situation like video rendering or photo editing it's hard to fake it to make it. Like if Handbrake was optimized for M3, that would make sense, because who wouldn't want that?

Wait a minute, aren't you the guy always saying that Apple is "cheating" with tasks like video editing and transcoding because the software is optimized for their hardware?
 
Apple integrates its accelerators into everything from audio processing to decoding online streaming services to image upscaling so there are lots of little background tasks that we take for granted in Gaming or production that Apple just made a normal thing.
So there are some real day-to-day benefits, but you have to put them on a platform that uses them and Microsoft has so far flopped on that.
This is the crux of it that DukenukemX doesn't understand - we are witnessing the shift away from general purpose "cpus" as a monolithic generalist compute entity towards a "cpu" being a collection of specialized accelerators that share a die or package. At some point, people will look at "cpu" benchmarks kind of funny because they are irrelevant compared to the task benchmarks that the "cpu" has hardware acceleration for. Optimizing for a specific benchmark will be no different than optimizing for a use case - if the accelerator is there, it's just a matter of software support.

This is not an Apple thing; everyone is moving in that direction. Yes, there will always be some general purpose compute resources, but the importance of them will gradually reduce over time as various types of hardware acceleration and the software support for it advances. The easiest way to boost performance and conserve power is with hardware acceleration for as many different use cases as possible. Unfortunately for AMD and Intel, the software support for this is lagging behind a bit on Windows but it will catch up in time.

If you look at something like a Xilinx Ultrascale MPSoC, I think it's a perfect example. Multiple types of processors ranging from standard ARM to real-time, a GPU, dedicated transceivers, programmable logic, tons of DSP slices, etc. All on a single piece of silicon that looks like a "cpu" to the layman. It would be pointless to run a cpu benchmark on this using the arm cores since whatever application the chip is selected for is going to have custom hardware acceleration written for it on the programmable logic side that will absolutely smoke the CPU.
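As a rough illustration of the "just a matter of software support" point on the Intel side: runtimes like OpenVINO already treat the NPU as one more device string you compile a model for, with the CPU as fallback when the plugin or driver isn't there. A minimal sketch, assuming the OpenVINO runtime is installed and "model.xml" stands in for an already-converted model:

```python
# Sketch only: assumes the OpenVINO runtime (pip install openvino) and an
# already-converted IR model; "model.xml" is a placeholder path, and "NPU"
# is the device name recent OpenVINO releases use for Meteor Lake's NPU.
import openvino as ov

core = ov.Core()
print("Devices visible to OpenVINO:", core.available_devices)

# Prefer the NPU when the driver/plugin exposes it, otherwise fall back to CPU.
device = "NPU" if "NPU" in core.available_devices else "CPU"
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name=device)
print("Compiled for", device)
```

Nothing above the compile call changes when the accelerator shows up, which is the whole pitch.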
 
This is the crux of it that DukenukemX doesn't understand - we are witnessing the shift away from general purpose "cpus" as a monolithic generalist compute entity towards a "cpu" being a collection of specialized accelerators that share a die or package. At some point, people will look at "cpu" benchmarks kind of funny because they are irrelevant compared to the task benchmarks that the "cpu" has hardware acceleration for. Optimizing for a specific benchmark will be no different than optimizing for a use case - if the accelerator is there, it's just a matter of software support.

This is not an Apple thing; everyone is moving in that direction. Yes, there will always be some general purpose compute resources, but the importance of them will gradually reduce over time as various types of hardware acceleration and the software support for it advances. The easiest way to boost performance and conserve power is with hardware acceleration for as many different use cases as possible. Unfortunately for AMD and Intel, the software support for this is lagging behind a bit on Windows but it will catch up in time.

If you look at something like a Xilinx Ultrascale MPSoC, I think it's a perfect example. Multiple types of processors ranging from standard ARM to real-time, a GPU, dedicated transceivers, programmable logic, tons of DSP slices, etc. All on a single piece of silicon that looks like a "cpu" to the layman. It would be pointless to run a cpu benchmark on this using the arm cores since whatever application the chip is selected for is going to have custom hardware acceleration written for it on the programmable logic side that will absolutely smoke the CPU.
Just not familiar with it. Apple logs everything, so they know exactly what their users' batteries are getting eaten by, so they accelerate the simple things that they can, and because they control the platform from top to bottom they just do it all in the background, because they know exactly what hardware is there. Apple gets console levels of optimization available to them, something nobody else can match.

Even Android, with its collectively disorganized AI acceleration, is a solid decade ahead of the PC marketplace. You can get a $0-upgrade Android phone that is in a better place out of the box for upscaling and decoding video streams, recording audio, taking videos, editing images, cleaning up audio streams, correcting spelling or grammar, and even optimizing network packets for faster-loading web pages. Microsoft doesn't build it in because so few laptops are capable of it (you would need an Nvidia GPU to do it, and AMD and Intel have delivered nothing in those categories to date), and the laptop space is losing badly to the tablet and phone market for day-to-day use.

You can take and clean up photos on any reasonably new phone easier, faster, and using dramatically less energy than you can on any laptop that costs the same as that phone, and that's both sad and awesome at the same time. A MacBook looks and sounds far better in any Zoom or Teams meeting than a similarly priced Windows machine, and often a lot better than the expensive laptops, because those often get worse cameras and microphones; the makers just assume the users will be using an external solution, so it's something they can cut unnoticed. The Apple camera and microphone aren't that much better, but the AI enhancements working on those devices to make them look and sound better than they actually are is industry-leading.

Machine learning, or AI as it likes to be called, does a lot of little creature-comfort things in the background that go 100% unnoticed, because nobody cares why the website loads 1s faster, just that it does. They don't want to know the science behind why they can just talk into their device while their coworker needs a headset; they just get to be better off than the person who needs one. So AI and machine learning have progressed to a seamless place where they are getting used by billions of people every day for basic everyday tasks they 100% take for granted, but you sit down at your expensive laptop or desktop and you certainly notice that, while it may have a bigger screen, that little device to your right does it better, easier, and often faster.

There is a solid reason why PC sales go down a little every year: while PCs have vastly better functionality, they as a platform are failing to deliver a better experience, and mobile, be it iPhone or Android, is doing the same job easier, faster, and more conveniently. So laptops in particular need to step up their game or find themselves replaced by a Galaxy Tab or an iPad for the bulk of the average user's day to day. The one big holdout was keyboards, and you can get cases with keyboards cheap.
 
A problem with this test needs to be addressed, one that can also be found in similar form in many other tests of mobile devices: hardware testers often do not pay attention to the best possible comparability with regard to power limits and memory clocks, nor do they at least sufficiently document this data. In this specific case, there was an Intel processor with 45/55W power limits and LPDDR5-6400, and an AMD processor with 35/51W power limits and DDR5-5600. Unfortunately, these specifications could not be found in the test report itself, but had to be laboriously collected from other websites. The Phoronix test report only named the official TDP figures from AMD & Intel, which are even identical at 28 watts, and thus sent the reader down the wrong track as to whether these processors really run with the same power limits in the specific notebooks.

The ideal, the same specifications for all test candidates, is undoubtedly difficult to achieve in the mobile field. But at least the differences should be documented, and they should also be mentioned in the commentary on the results. The only source where you usually get complete specification information is Notebookcheck, which, however, also lacked information on the memory clock of the Acer notebook in this specific case.

https://3dcenter.org/news/news-des-20-dezember-2023
 
Wait a minute, aren't you the guy always saying that Apple is "cheating" with tasks like video editing and transcoding because the software is optimized for their hardware?
I said that people were confusing ARM's performance with hardware video encoding performance. Tests like these from Phoronix are far better because they just isolate the ARM CPU.
This is the crux of it that DukenukemX doesn't understand - we are witnessing the shift away from general purpose "cpus" as a monolithic generalist compute entity towards a "cpu" being a collection of specialized accelerators that share a die or package. At some point, people will look at "cpu" benchmarks kind of funny because they are irrelevant compared to the task benchmarks that the "cpu" has hardware acceleration for. Optimizing for a specific benchmark will be no different than optimizing for a use case - if the accelerator is there, it's just a matter of software support.

This is not an Apple thing; everyone is moving in that direction. Yes, there will always be some general purpose compute resources, but the importance of them will gradually reduce over time as various types of hardware acceleration and the software support for it advances. The easiest way to boost performance and conserve power is with hardware acceleration for as many different use cases as possible. Unfortunately for AMD and Intel, the software support for this is lagging behind a bit on Windows but it will catch up in time.

If you look at something like a Xilinx Ultrascale MPSoC, I think it's a perfect example. Multiple types of processors ranging from standard ARM to real-time, a GPU, dedicated transceivers, programmable logic, tons of DSP slices, etc. All on a single piece of silicon that looks like a "cpu" to the layman. It would be pointless to run a cpu benchmark on this using the arm cores since whatever application the chip is selected for is going to have custom hardware acceleration written for it on the programmable logic side that will absolutely smoke the CPU.
Keep in mind I'm saying that Geekbench and now Cinebench R24 favor Apple, but you don't see these results outside of those synthetic tests. When Max Tech tested the M3 Pro against the Legion Slim 7i (i9-13900H), the Intel machine destroyed it, except in Geekbench, in both single-core and multi-core. The Intel machine was faster in Blender, HEVC export, and Blackmagic denoise. It just seems like reviewers are jumping on quick and easy benchmarks like Geekbench and Cinebench without adding anything else. Meteor Lake supports AV1 encoding, but nobody I've seen has tested it. They don't even bother running a quick Tomb Raider test, which is now an old game by today's standards.
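On the AV1 point, a quick sanity test is doable today if your FFmpeg build has Quick Sync (oneVPL) support, since the media engine is exposed through the av1_qsv encoder; a hedged sketch, with the file names as placeholders:

```python
# Sketch: exercise the AV1 hardware encoder through FFmpeg's QSV path.
# Assumes an FFmpeg build with oneVPL support (the av1_qsv encoder) and the
# Intel media driver installed; "input.mp4" / "out.mkv" are placeholder names.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",
    "-vf", "format=nv12",   # QSV encoders expect NV12/P010 input
    "-c:v", "av1_qsv",      # hardware AV1 encode on the media engine
    "-b:v", "6M",
    "out.mkv",
]
subprocess.run(cmd, check=True)
```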
 
Max Tech did test this and showed that Apple was fantastically faster. What test they used I don't know. How does this benefit users, I don't know.

Ok, but how? The link seems to explain that they make use of Apple's ray-tracing hardware, which is what they use for their Maxon software, but does this mean they also use it to boost Cinebench R24 scores? That would be a heavy bias, using GPU ray-tracing to boost CPU benchmark scores. You don't see this manifest in any real-world applications, so I have to wonder. This is why I don't like synthetic tests: it's very easy to tweak the test in someone's favor, whereas in a real-world situation like video rendering or photo editing it's hard to fake it to make it. Like if Handbrake was optimized for M3, that would make sense, because who wouldn't want that? It does make sense for Maxon to make use of ray-tracing hardware because their software does ray-tracing, but it does call into question the validity of their R24 scores.
Cinebench is usually almost 100% a real-world rendering scenario, using the same render toolchain many people use in the most popular render product; it makes an actual render that you see on your screen at the end. If you look at Blender rendering with CUDA, Metal, or OptiX versus without, that's something we see in the real world all the time: a special hardware/software path accelerating that type of workload. Ray-tracing capacity can obviously manifest itself in something like rendering:

[Image: CUDA and OptiX in Blender benchmark results]


Were a CPU able to run OptiX, the real-world engine they use to benchmark would use it automatically when it is available; it would require synthetically "cheating" to try to make it run on the traditional CPU instruction set only.

I imagine the correlation between Cinebench scores and real-world results of Redshift Cinema 4D rendering is really close; people use Redshift in Maya, 3ds Max, Blender, etc., just as they do in Cinebench when they run a render. It is synthetic in the way a game benchmark running a pre-determined sequence in an actual Unreal Engine 5 game would be.

The issue would probably be more that Cinebench uses a real-world product so directly that it can get complicated if that real-world product tries to use everything it can, rather than the test being too synthetic.

Even Pixar's render chain uses AI upscaling; Cinebench starting to use it would not be something out of step with the real world.
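For what it's worth, that device selection is something Blender exposes directly on the command line, which is roughly how CPU-vs-OptiX comparisons like the chart above get generated. A sketch, assuming Blender is on the PATH with CUDA/OptiX-capable Cycles and "scene.blend" as a placeholder file:

```python
# Sketch: render the same frame with Cycles on the CPU and on OptiX to compare.
# "scene.blend" is a placeholder; requires Blender on PATH built with CUDA/OptiX.
import subprocess, time

def render(device):
    start = time.time()
    subprocess.run([
        "blender", "-b", "scene.blend",    # background (no GUI) render
        "-E", "CYCLES",                    # use the Cycles engine
        "-f", "1",                         # render frame 1
        "--", "--cycles-device", device,   # CPU, CUDA, OPTIX, HIP, METAL, ...
    ], check=True)
    return time.time() - start

for device in ("CPU", "OPTIX"):
    print(device, f"{render(device):.1f}s")
```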
 
And the reviewer immediately runs Cinebench. Not Cinebench R24, but R23, which Intel wasn't doing that badly in to begin with. It also says a lot about Intel and how disorganized they are if they didn't realize they made this mistake. Not that first-generation Ryzen didn't have a number of issues at release; I had a heck of a time getting RAM timings stable.
 
And the reviewer immediately runs Cinebench. Not Cinebench R24, but R23, which Intel wasn't doing that badly in to begin with. It also says a lot about Intel and how disorganized they are if they didn't realize they made this mistake. Not that first-generation Ryzen didn't have a number of issues at release; I had a heck of a time getting RAM timings stable.
There is a valid reason to test something you’re doing well at as a baseline.
It's sadly not uncommon for updates to "trade" performance: lower something you're doing super well at to improve an area you're lacking in. Or an update can be little more than a profile tune-up to improve bad benchmarks, so if we saw a decrease in R23, or no change, it may indicate it's less of a platform improvement and more of an optimization, which then frames how you may approach later tests.

Could also be R23 was what they had currently installed and they simply wanted to post something to get it out first in the middle of the Christmas rush and give more details later. Sadly better to have something that’s incomplete than to post nothing and have another site scoop it from you.
 
In the land of "poop", IMHO, the war is over iGPU performance right now. I think this is "fine" as Intel's first foray into Arc iGPU. Holds promise. Intel can "evolve" and be fine (doesn't have to dominate or "win" today). It's not like Intel just lost its crown here, they've been trying to get it back for a bit. I think they will eventually "get there". AMD, again, I think, is resting a bit "too much" (the reward for hard work is... hard work!!). We'll see.

Intel plus discrete in old school <1hr "thick boy" style (mostly non-mobile) laptops, is still what it is there.... Intel does have the opportunity for "thick boy" Arc discrete.... again, we'll see.

Now.... if Intel focuses a bit more on Linux support (they've been very very very lacking of late) and could convince Valve (for example) to go their way, it would really help shore up the Intel Arc iGPU line.

Just being honest, but Valve + Steamdeck (which means Linux) was a huge reason for AMD's current mobile APU success.
 
Linus Tech Tips finally did benchmarks on Intel's 155H and it looks pretty good. They even included Apple's M3 as well as AMD's 7840U. No power consumption or battery tests though, but they did spend a third of the video explaining the naming scheme for these chips. I'd skip that part.

View: https://youtu.be/gape0TpU3I4?si=Fd7KRapl4BTs_sM0

I won't give him the clicks for that video, but the Core Ultra series looks good. Would I like a little more performance out of both the CPU and the GPU? Sure, but overall as a package I have very little to complain about.
Now if Microsoft can get their NPU acceleration working correctly that would be icing on the cake, to be fair though Microsoft's "AI" acceleration within Windows is very limited and sparsely supported for all 3 (Intel, AMD, and Nvidia) so I am not faulting Intel for that.
Ultimately Microsoft taking so GD long to update things to take advantage of accelerators by default is why I believe Intel is pushing their rentable units strategy because why trust others to implement support for your hardware when you can do it for them?
 
IMHO, it's a "bridge product". A first go at "tiles", their form of chiplets. Encouraging, but have a feeling you don't want the first round.
 
IMHO, it's a "bridge product". A first go at "tiles", their form of chiplets. Encouraging, but have a feeling you don't want the first round.
Technically round 2 or 3. Ponte Vecchio was the first, 3 years ago, clocking in at 63 tiles spread over 1280 sq mm. It still holds up at its designated workloads.

But Fab 9 in New Mexico only recently came online so Intel now has the capacity to start delivering it for more than their own government and datacenter contracts.
 
Technically round 2 or 3. Ponte Vecchio was the first, 3 years ago, clocking in at 63 tiles spread over 1280 sq mm. It still holds up at its designated workloads.

But Fab 9 in New Mexico only recently came online so Intel now has the capacity to start delivering it for more than their own government and datacenter contracts.
Maybe I should have said, first in this product space.
 
I won't give him the clicks for that video, but the Core Ultra series looks good. Would I like a little more performance out of both the CPU and the GPU? Sure, but overall as a package I have very little to complain about.
His video does show an extensive set of benchmarks, and only showed Cinebench once, with no Geekbench. He shows it with three $1300 laptops, which makes things very even. The only disappointment was no power consumption tests, which would have made things more interesting.
Now if Microsoft can get their NPU acceleration working correctly that would be icing on the cake, to be fair though Microsoft's "AI" acceleration within Windows is very limited and sparsely supported for all 3 (Intel, AMD, and Nvidia) so I am not faulting Intel for that.
Ultimately Microsoft taking so GD long to update things to take advantage of accelerators by default is why I believe Intel is pushing their rentable units strategy because why trust others to implement support for your hardware when you can do it for them?
I'm not sure what use Microsoft could find for these NPUs, but Linus Tech Tips did test it, and while the GPU was faster, it did use more power. The problem is that Microsoft is also looking to rent out their AI hardware and services. That's the problem with this AI gold rush: everyone wants to sell you their AI service.
IMHO, it's a "bridge product". A first go at "tiles", their form of chiplets. Encouraging, but have a feeling you don't want the first round.
The same can be said about Intel's Arc GPUs, but again, for the right price I'd be willing. Though each laptop was $1300, so I probably wouldn't.
 
I'm not sure what use Microsoft could find for these NPUs, but Linus Tech Tips did test it, and while the GPU was faster, it did use more power. The problem is that Microsoft is also looking to rent out their AI hardware and services. That's the problem with this AI gold rush: everyone wants to sell you their AI service.
Lots of things it can do, but currently it does duck all. Microsoft hasn't implemented the patches for them to be used yet; the GPU draws a lot of power doing spell check and grammar check when the NPU could do it for peanuts, and the same goes for cleaning up Zoom/Teams video and audio streams. It would also be nice if it handled YouTube upscaling.
Those are all things Android and Apple do with the AI/ML accelerators built into their ARM chips, and they do it there for 1/10th of what most GPUs pull doing it. Things like that really don't require a lot of power and have been ARM staples since late v7 or early v8.
That's part of why I think Intel wants to get its rentable units into the wild: Intel can put all the accelerators and ML or AI toys on those chips, but if Microsoft doesn't get around to coding for them then they do nothing. Linux added support in 6.6, but it's still not fully there yet.
Apple gets top-down, 100% integration and makes the absolute most of what they can squeeze out of the silicon they put in there. If Microsoft is going to be too slow to adapt their software to the changing hardware, then it's Intel's job to take that control away and make it a hardware platform thing instead: everything gets assigned to either the CPU or the GPU as per normal Windows business, and Intel just goes OK no, you go to the GPU instead, you three to the NPU, and the rest to the CPU.
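On the Linux side, a rough way to check whether the NPU is even exposed to userspace is to look for the accel device node and the intel_vpu module that the upstream driver registers; a sketch based on the mainline driver layout, not on anything specific to this thread:

```python
# Sketch: check whether the kernel's Intel NPU driver (intel_vpu, via the DRM
# accel subsystem) is loaded and exposing a device node. Paths follow the
# mainline driver; distro packaging may differ.
import glob, pathlib

nodes = glob.glob("/dev/accel/accel*")
loaded = "intel_vpu" in pathlib.Path("/proc/modules").read_text()

print("accel device nodes:", nodes or "none")
print("intel_vpu module loaded:", loaded)
# Even with both present, applications still need a userspace stack
# (e.g. an NPU-aware runtime) before anything actually runs on it.
```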
Microsoft is sure "trying" with their schedulers and such but really.... are they doing a good job?

It's expected that laptops will account for more than 90% of the consumer computer market in the next 5-7 years (not including tablets or mobile), the desktop continues to rapidly dwindle for the vast consumer user base and it saddens me.
 