[SLUG] Re: ECS K7S5A -- Interrupt 9/2, Interrupt Sharing, AMI BIOSes, etc...

From: Bryan J. Smith (b.j.smith@ieee.org)
Date: Sun Nov 28 2004 - 23:30:13 EST


On Sun, 2004-11-28 at 22:15, Chad Perrin wrote:
> I seem to recall, though, that often the AGP and the first PCI would
> share an interrupt in early implementations of the AGP bus.

Some video cards expect Interrupt 9, which is tied to the Interrupt 2
cascade on AT-compatible systems, and various chipsets' interrupt
controllers didn't like that, or would only let a card on PCI bus 0
claim it (AGP is typically PCI bus 1 on 99.9% of mainboards). That is
more of a chipset issue than an AGP issue.

In other cases, BIOSes will map shared interrupts across any of the
slots -- AGP, PCI, even ISA or AMR/CNR/ACR, etc... For example, my
AMD/ViA chipsets generally map the AGP slot and PCI slot 2 or 3 to the
same IRQ. Sharing IRQs is allowed by the PCI spec and handled fine by
all modern OSes.
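
If you're curious how your particular BIOS mapped things, Linux shows
the sharing directly in /proc/interrupts. Here's a throwaway sketch
(assuming the usual layout -- IRQ number, per-CPU counts, controller,
then a comma-separated device list; column details vary a bit between
kernels) that prints any IRQ line with more than one device on it:

  #!/usr/bin/env python
  # Throwaway sketch: list IRQ lines shared by more than one device,
  # by parsing /proc/interrupts on a Linux box.  Assumed layout:
  #   "IRQ:  per-CPU counts...  controller  device[, device, ...]"
  # Rows without a numeric IRQ (header, NMI/LOC/ERR) are skipped.
  import re

  def main(path="/proc/interrupts"):
      with open(path) as f:
          for line in f:
              m = re.match(r"\s*(\d+):((?:\s+\d+)+)\s+(.*)", line)
              if not m:
                  continue                    # header row, NMI/LOC/ERR, etc.
              irq, tail = int(m.group(1)), m.group(3).strip()
              if "," in tail:                 # more than one device on this IRQ
                  print("IRQ %2d is shared: %s" % (irq, tail))

  if __name__ == "__main__":
      main()

A board that maps the AGP slot and PCI slot 2 or 3 to the same IRQ,
like my ViA boards do, simply shows both devices on one line.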

The only issue I've seen is with pre-6.0 AMI BIOSes, which took issue
with shared interrupts. But even then it was only a problem with an OS
that constantly chucks the CPU back and forth between Real86 and
Virtual86/Protected386 modes -- i.e., MS-DOS/386Enhanced Windows,
including Windows 95, 98 and ME. No other OS does that, not even OS/2,
which works very much like 386Enhanced Windows but does not rely on a
Real86 DOS underneath.

> I think that's what he was trying to point out in this case.

Understood. Just understand it's really a chipset issue more than an
AGP issue. It's analogous to saying Pentium mainboards slow down with
more than 64MB of RAM, when that limitation only applies to the
i430FX/VX/TX (Triton I/III/IV) chipsets. You have to know all the
details.

But AGP has always been a separate, segmented PCI bus -- at least
typically*1*. Technically it is an Intel Trade Secret*2* version of
32-bit PCI (_not_ a PCI Special Interest Group, PCI-SIG, standard) with
a higher clock, plus some "hacks" that let the card address system
memory much the way the CPU does. Long story short, Intel has _never_
designed well-bridged PCI logic (let alone a "true" system
interconnect), outsourcing it to Digital in years prior and to
ServerWorks (fka Reliance Computer Corporation, RCC) in more recent
years.
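
You can see the segmentation for yourself on any Linux 2.6 box: the AGP
card sits behind a PCI-PCI (AGP) bridge on its own bus number, usually
0000:01, while everything else piles onto bus 0000:00. A rough sketch,
assuming the stock sysfs layout under /sys/bus/pci/devices (entries
named like 0000:01:00.0, each with a "class" file):

  #!/usr/bin/env python
  # Rough sketch: group PCI devices by bus number to show the AGP card
  # sitting on its own bus segment (typically 0000:01).  Assumes a
  # Linux 2.6 sysfs layout: /sys/bus/pci/devices/DDDD:BB:SS.F/class.
  import os

  def devices_by_bus(root="/sys/bus/pci/devices"):
      buses = {}
      for dev in sorted(os.listdir(root)):        # e.g. "0000:01:00.0"
          domain, bus, _slot_func = dev.split(":")
          with open(os.path.join(root, dev, "class")) as f:
              pci_class = f.read().strip()        # e.g. "0x030000" = VGA
          buses.setdefault(domain + ":" + bus, []).append((dev, pci_class))
      return buses

  if __name__ == "__main__":
      for bus, devs in sorted(devices_by_bus().items()):
          print("PCI bus %s:" % bus)
          for name, pci_class in devs:
              print("  %s  class %s" % (name, pci_class))

On a typical AGP board the display adapter (class 0x0300xx) is the only
thing on bus 01, which is the whole point of the segment.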

> As I don't know much (anything, particularly) about the motherboard in
> question, I couldn't tell you if that's applicable in this case.

The SiS735 was a good chipset. It was really the first chipset to use a
1GBps MuTIOL link (SiS's own HyperTransport-like interconnect) between
the northbridge and southbridge, which were in the same BGA package. An
ingenious ASIC design, IMHO.

> I may be misremembering. I so rarely have any use for the first PCI
> slot that it hasn't really come up for me, anyway. Usually, just for
> purposes of shortening the distance cables have to stretch and keeping
> cables from getting too snarled together, it's easier for me to have PCI
> cards slotted nearer the bottom.

Or the top, if you have either a new BTX case or a "flipped" ATX case.
I've been advocating this for 3 years now, and the vendors have finally
caught on.

[ I personally have a Lian Li PC-1200V for my dual Athlon MP2400+ ]

-- Bryan

*1* NOTE
WARNING: New Mainboards with both AGP and PCI-Express x16

Some new mainboards are sporting (and marketing themselves as having)
both an AGP slot and a PCI-Express x16 slot. How? They simply put the
AGP slot on the "shared," legacy 0.125GBps PCI bus. So not only is it a
33MHz AGP slot (not even AGPx1 speed), but it shares all traffic with
every other PCI card. Talk about contention! If you buy one of these
boards, do _not_ use the AGP slot. ViA supposedly has a chipset with a
_true_, _bridged_ AGP slot on its own PCI bus alongside PCI-Express, but
I haven't seen it yet.
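
For a sense of scale, here's the back-of-the-envelope arithmetic
(theoretical peaks, clock x bus width x transfers per clock, ignoring
protocol overhead) showing how big the step down is from even AGPx1 to
that shared 33MHz PCI bus:

  #!/usr/bin/env python
  # Back-of-the-envelope peak bandwidths, ignoring protocol overhead:
  # clock (Hz) * bus width (bytes) * transfers per clock.  Shows why an
  # "AGP" slot hung off the shared 32-bit/33MHz PCI bus is such a step
  # down from a real AGP port.
  MB = 10**6

  buses = {
      "legacy PCI (32-bit, 33MHz, shared)": 33.3e6 * 4 * 1,
      "AGP 1x (66MHz)":                     66.6e6 * 4 * 1,
      "AGP 2x":                             66.6e6 * 4 * 2,
      "AGP 4x":                             66.6e6 * 4 * 4,
      "AGP 8x":                             66.6e6 * 4 * 8,
  }

  for name, peak in sorted(buses.items(), key=lambda kv: kv[1]):
      print("%-36s ~%4d MB/s peak" % (name, peak / MB))

That ~133MB/s is roughly the 0.125GBps figure above -- and every other
PCI card on the board is fighting for the same number.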

BTW, don't confuse a mainboard with both AGP and PCI-Express x16 with
one that is AGP but has a few PCI-Express x1 slots. It is possible to
hang PCI-Express x1 slots off the southbridge while the northbridge is
AGP (with a 0.5GBps or 1GBps HyperTransport interconnect from north to
south).

*2* NOTE
ATTITUDE: AGP, the Intel Trade Secret PCI Bastard

AGP is a non-standard PCI bastard that I am personally going to love
seeing die. The current AGPgarts in the Linux kernel suck, and suck
massively. In fact, unless you have an Athlon MP with a BIOS hack and a
newer Linux kernel (which turns its on-CPU, not on-chipset, AGPgart into
a "poor man's I/O MMU") or an A64/Opteron with its true I/O MMU, AGP's
Direct Memory Execute (DiME) is the best way to introduce non-coherency
into the system interconnect (i.e., crashes). Only nVidia has shipped
kernel code with its own, Intel-licensed (hence proprietary) AGPgart
internal to its drivers, which solves this massive instability (although
ATI has recently joined them).
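
If you want to know which GART code your own box is running -- the
in-kernel agpgart core with a chipset backend, or nVidia's in-driver
replacement -- the loaded module list tells you. A quick sketch,
assuming a modular 2.6 kernel (a built-in agpgart obviously won't show
up in /proc/modules):

  #!/usr/bin/env python
  # Quick sketch: scan /proc/modules for the in-kernel agpgart core, its
  # chipset-specific backends (via_agp, sis_agp, amd64_agp, ...), and the
  # proprietary nvidia module with its own internal GART.  Assumes a
  # modular kernel; built-in drivers are not listed in /proc/modules.
  tags = ("agpgart", "_agp", "nvidia")

  with open("/proc/modules") as f:
      names = [line.split()[0] for line in f]

  hits = [n for n in names if any(t in n for t in tags)]
  if hits:
      print("AGP-related modules loaded: " + ", ".join(hits))
  else:
      print("No AGP-related modules loaded (built-in agpgart, or no AGP).")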

PCI-Express is a _true_ PCI-SIG standard.

-- 
Bryan J. Smith                                    b.j.smith@ieee.org 
-------------------------------------------------------------------- 
Subtotal Cost of Ownership (SCO) for Windows being less than Linux
Total Cost of Ownership (TCO) assumes experts for the former, costly
retraining for the latter, omitted "software assurance" costs in 
compatible desktop OS/apps for the former, no free/legacy reuse for
latter, and no basic security, patch or downtime comparison at all.
