10GigE Ports on Dell Switches

Hello, we currently have:
2 PowerConnect 3548P PoE switches for phones
3 PowerConnect 5548 switches for computers and printers
2 PowerConnect 6224 switches for servers

2 PowerConnect 5524 switches for iSCSI traffic, which are isolated

I'm looking to link these switches together using the 10GigE ports on the right sides of the switches, as opposed to using a standard 1 Gigabit cable. Does anyone know where I can find these cables? I can't figure out exactly what they're called.
 
Distance is the big one; noise rejection is the other, but that's not typically an issue. For short lengths, copper is cheaper right now, and there's typically no speed difference.
If you really are moving to 10GbE links, I think fiber is the clear choice going forward.
 
If you're talking about SFP+, then use Twinax cables for short spans (<10m). They will cost you much less than one MM GBIC.

If it's SFP in a short span, then there's no reason not to use a standard Ethernet trunk instead.
 
The OP doesn't really have any 10GbE option with the switches he listed...

  • The PowerConnect 3548P does not support 10GbE. Its two SFP ports are Gigabit only.
  • The PowerConnect 6224 supports 10GbE only via optional cards installed into the stacking ports on the back. The four SFP slots on the front are Gigabit only.
  • The PowerConnect 5548 does, indeed, have two 10GbE-capable SFP+ ports standard on the front. But you don't have anything to connect them to on the other two switches...
  • The two PowerConnect 5524s also have SFP+ on the front, but as he noted they are "isolated".
I agree with Green91 - if you do have 10GbE SFP+ slots and your distances are short (<60m), then use Twinax DAC cables. You can get 5 or more of these cables for the price of one SFP+ optical interface.
 

Thanks, I appreciate the info. Seems I'll have to make the best of the three 5548s. I'll reorganize some traffic around them and get the cards for the 6224s.
 

Funny story. We have some 5548s too, and I bought some GBICs from Dell when I made the purchase for the switches. Come to find out I needed a couple more GBICs, so I called up our sales rep and gave her the product number, but she never could find the GBICs to sell me more. We went round and round for a couple of days with pictures, etc. She even got others to help her look, but no one could locate them.

Finally I got tired of waiting because we had to get the network operational, so I ended up buying two Finisar-brand 10Gb cards from CDWG. These are the exact cards that Dell rebrands.

Just an FYI..
 
Finisar (or however it's spelled) makes a lot of people's SFPs.

terminology note: :)
GBIC/MiniGBIC is the port form factor

SFP/SFP+ are the actual transceivers you buy

Don't worry, it's a super common 'error', even with other net engs I work with.
 
Finisar does make GBICs for many companies, including HP ProCurve, Dell, Cisco, etc. But just because you buy a Finisar doesn't mean it's compatible. They lock down certain serial-number ranges to the switch equipment. ProCurve switches are particularly finicky about only allowing "ProCurve" transceivers, even if they are Finisar (non-HP-branded ones).
 
Yes... far better latency and longer range over fiber.

The latency is definitely better for metro or even campus-length links, but for short links (100 meters or less) the difference is marginal, if even detectable. I run 3524s and 3548s and hit 1ms on 100m links across two switches.
I guess a case could be made for fiber if you were at or near capacity on the GbE and every microsecond mattered to reduce latency, but at that point something more than fiber is necessary. For short runs, I don't see the cost benefit of fiber over copper. Long runs, fiber wins hands down, and fiber is definitely the future, but short runs of copper are still cheaper and the performance difference is negligible.
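If anyone wants to sanity-check numbers like that themselves, here's a minimal Python sketch (my assumptions: a Linux/macOS host on one switch, and some machine on the far switch with an open TCP port; the 192.168.1.50 address and port 22 are placeholders, not anything from this thread). It times TCP handshakes to the far machine, which stay on the forwarding path rather than hitting either switch's own CPU:

```python
import socket
import statistics
import time

def handshake_rtt_ms(host: str, port: int = 22, samples: int = 20) -> float:
    """Rough data-plane RTT estimate: time TCP handshakes to a host on the far switch."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # The three-way handshake is forwarded in hardware by the switches in between.
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000.0)  # convert to ms
    return statistics.median(times)

if __name__ == "__main__":
    # Placeholder: any machine behind the far switch with an open TCP port (SSH here).
    print(f"median handshake time: {handshake_rtt_ms('192.168.1.50'):.2f} ms")
```

The handshake time includes both endpoints' TCP stacks, so treat it as an upper bound on what the links and switches themselves contribute.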

Another issue I just thought of is that I'm not sure the 35xx switches will stack using SFP - dunno if that is an issue for the OP.
 

Well, yes, there is no definable or honestly justifiable benefit to running fiber over copper at less than 100m if you are staying at 1GbE. Going 10GbE, however, is another discussion - not for this thread, of course.
 
If you want to optimize your latency, you should take a look at the Arista gear.

Also take note that when you perform a traceroute/ping, it's often the system CPU/management plane that responds to your traceroute/ping packets, not the switch fabric/data plane itself. That means you will see higher latency than traffic passing through that particular device actually experiences (unless it's a device that uses its system CPU for everything, or uses "features" like policy-based routing, where it's not uncommon for those packets to be sent to the system CPU/management plane for processing instead of being handled by the switch fabric/data plane itself).
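You can see that effect for yourself with a quick sketch like the one below (assuming a Linux box with iputils ping; the two addresses are placeholders for a switch's management IP and a host on the far side of it, not real values from this thread). Pinging the switch's own address is answered by its management CPU, while pinging a host through it exercises the switch fabric:

```python
import re
import subprocess

def avg_ping_ms(host: str, count: int = 10) -> float:
    """Average RTT in ms, parsed from the iputils ping summary line."""
    out = subprocess.run(
        ["ping", "-q", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 0.181/0.204/0.257/0.021 ms
    match = re.search(r"=\s*[\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

if __name__ == "__main__":
    switch_mgmt = "192.168.1.2"   # placeholder: the switch's own management IP
    far_host = "192.168.1.50"     # placeholder: a host reached through that switch
    print(f"to the switch's mgmt CPU : {avg_ping_ms(switch_mgmt):.3f} ms")
    print(f"through the switch fabric: {avg_ping_ms(far_host):.3f} ms")
```

Don't be surprised if the first number is the larger one - that's the management plane deprioritizing ICMP, not the fabric being slow.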
 