MITCH LOEBEL COMMENTS ON SCI                                        10.25.96
FEATURES AND COMMENTARY                                               HPCwire
=============================================================================

  San Diego, Calif. -- In response to HPCwire's recent three-part interview
with Dave Gustavson on SCI (see articles 10249, 10282 and 10316, "DAVE
GUSTAVSON ANSWERS QUESTIONS ABOUT SCI, PARTS I, II AND III," 10.04.96,
10.11.96 and 10.18.96), B. Mitchell Loebel, strategic alliances and
partnering director for the PARALLEL Processing Connection and marketing
director for Minute-Tape International Corporation, has written the following
note and commentary.

---

  Whereas Dave chronicled the history and rationale behind SCI, I believe the
uninformed reader is left without any high points that summarize the issues.
As I mentioned, SCI has been plagued by misconceptions (which is why it is
improperly compared and contrasted with ATM and DASH). This is a real
opportunity to clear away the smoke.

  HPCwire Question: Briefly describe how SCI operates, and contrast this with
the other principal networking technologies, such as ATM, HIPPI, etc.

  LOEBEL: One of the biggest problems facing SCI has been the fact that it is
not properly understood. Your question itself implies that it is a networking
technology, "... such as ATM, HIPPI, etc." Networking is generally used to
move files for non-computational purposes, e.g. sound, graphics, or text
(such as this note). SCI moves data as part of computational cycles, i.e.
loads and stores. Even your question below about moving large packets of
information implies non-computational file movement. The DASH system from
Stanford University never got saddled with that misconception. In truth, SCI
should be contrasted with DASH, and here SCI has several important
advantages: (1) SCI is an IEEE standard, whereas DASH is/was a research
program; (2) SCI allows certain optimizations in the cache-coherence
protocol, specifically because the directory of cache copies resides in the
caches themselves rather than in memory. I believe that the discussions of
SCI's very fast inter-nodal speeds (1 GByte/sec) have skewed public
understanding into not seeing it as a COMPUTATIONAL mechanism. Simply put,
the SCI protocol enables Distributed Shared Memory architectures.
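
  To make the distinction concrete, here is a minimal C sketch of what
"loads and stores" across nodes means. The mapping call is hypothetical
(it stands in for whatever a vendor's SCI driver would provide, and is
not part of the SCI standard); the point is that once a remote node's
memory is mapped, access is an ordinary memory operation, not a file or
packet transfer.

    /* A minimal sketch, assuming a hypothetical driver call         */
    /* sci_map_remote(); this is NOT a standard SCI API.             */
    extern volatile double *sci_map_remote(int node, int region,
                                           long nbytes);

    void scale_remote(int node, int region, long n)
    {
        volatile double *shared =
            sci_map_remote(node, region, n * (long)sizeof(double));
        long i;
        for (i = 0; i < n; i++)
            shared[i] *= 2.0;   /* plain loads and stores; the SCI
                                   interconnect and its cache-coherence
                                   protocol handle the remote traffic */
    }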

  HPCwire Question: What companies presently offer SCI, and how does its cost
compare with competing technologies?

  LOEBEL: The PARALLEL Processing Connection has spawned a start-up called
Multi-Node Microsystems Corporation, which is developing implementations of
the cache-coherent interface between both the 604 and P6 buses and a cluster
of workstations/PCs tied together with SCI. (The PARALLEL Processing
Connection is an entrepreneurial association headquartered in Sunnyvale,
founded in late '89. Our raison d'etre is to be a spawning ground for new
high-performance computer companies. We hold monthly meetings at Sun
Microsystems in Palo Alto, and I am its Executive Director.)

  HPCwire Question: Do you feel SCI receives enough technical-media
attention? If not, why?

  LOEBEL: I am most concerned that the media gives SCI the wrong kind of
attention (see above). HPCwire could do the entire computer community a
great service by clarifying the issues.

  HPCwire Question: For what uses is SCI most and least suitable?

  LOEBEL: SCI has great value where large data sets are involved. In other
words, hundreds or thousands of processors that must read from a common data
set in order to perform many short computations would probably want to
access a common shared memory space. The alternative is for each processor
to have its own private memory space into which the very large data set must
be copied; that involves lots of network traffic, and lots of memory for
essentially redundant storage. Examples of such applications include 3D
graphics and Virtual Reality rendering, atmospheric modeling as done at the
Naval Research Laboratory, and EDA.
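
  To put illustrative numbers on that trade-off (these are hypothetical
figures, not benchmarks): a 1-GByte data set replicated into the private
memories of 1,000 processors consumes roughly 1 TByte of essentially
redundant storage, plus the network traffic needed to deliver 1,000 copies.
The same data set held once in a shared memory space occupies 1 GByte and
moves only the cache lines each processor actually touches.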

  SCI's other benefit lies in the fact that the shared-memory programming
model is widely acknowledged to be far more comfortable for programmers than
explicit message passing. In fact, CC-NUMA, which SCI enables, is
essentially an extension of SMP to very large numbers of processors.
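
  The difference in programming models can be sketched in a few lines of C.
The message-passing half below uses MPI calls purely for illustration and
omits the matching receive/send loop that must run on the owning node; the
shared-memory half assumes the data set is simply addressable, which is what
CC-NUMA over SCI provides. Both function names are mine, not from any
library.

    #include <mpi.h>   /* needed by the message-passing half only */

    /* Shared-memory style: any processor reads any element directly. */
    double lookup_shared(double *dataset, long i)
    {
        return dataset[i];      /* one load, wherever the data lives */
    }

    /* Message-passing style: the owning node must be asked explicitly. */
    double lookup_message(long i, int owner)
    {
        double value;
        MPI_Status status;
        MPI_Send(&i, 1, MPI_LONG, owner, 0, MPI_COMM_WORLD);
        MPI_Recv(&value, 1, MPI_DOUBLE, owner, 1, MPI_COMM_WORLD,
                 &status);
        return value;
    }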

  HPCwire Question: What can be done to facilitate end-users' understanding
of I/O and networking as memory transfers, as SCI requires?

  LOEBEL: Once again, it is important to understand that SCI involves access
to a remote node's memory controller (as in loads and stores) rather than to
an I/O port.
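
  In C terms, the contrast is roughly the following (a schematic sketch; the
file descriptor and mapped pointer are assumed to have been set up already):

    #include <unistd.h>

    /* I/O model: data crosses an explicit port via a system call,
       through the kernel and a device driver, with a copy.         */
    long fetch_io(int fd, double *buf, long n)
    {
        return read(fd, buf, n * sizeof(double));
    }

    /* SCI model: the remote node's memory controller services an
       ordinary load; no system call is involved.                   */
    double fetch_sci(volatile double *remote_mem, long i)
    {
        return remote_mem[i];
    }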

  HPCwire Question: Please give a realistic assessment of SCI's future.

  LOEBEL: SCI offers a sophisticated solution to a COMPUTATIONAL problem (at
commodity prices). The problems are the misconceptions noted above and the
difficulty of obtaining access to memory/processor buses, because most
system vendors keep them proprietary. In the case of Intel's P6 bus, it is
generally understood that Intel heavy-handedly requires any vendor of a
device that will be plugged into that bus to obtain a license. Even worse,
entrepreneurial enterprises are precluded from participating, because such
organizations aren't even on Intel's business radar. And all licensees are
required to fully disclose their product information to Intel. As you can
imagine, this very adversely affects an entrepreneur's opportunity to obtain
funding. Parenthetically (and thankfully), it appears that IBM is not taking
that route with its 60X bus. As a result, would-be SCI vendors are forced to
put their devices on Sun's S-bus or the PCI bus; neither of these buses
supports cache coherence. Thus, only the high-speed interconnect aspect of
SCI is widely used, and the rest of it is ignored, forgotten, and
misunderstood. That will change in the very near future!

----------------

HPCwire welcomes reader comments and suggestions. Please send feedback to
Alan Beck, editor in chief, editor@hpcwire.tgc.com

************************************************************************** 
                      H P C w i r e   S P O N S O R S                       

       Product specifications and company information in this section are   
             available to both subscribers and non-subscribers.             
                                                                            
 936) Sony                  905) Maximum Strategy    937) Digital Equipment
 934) HP/Convex Tech. Center 930) HNSX Supercomputers 932) Portland Group   
 921) Cray Research Inc.    902) IBM Corp.           915) Genias Software    
 909) Fujitsu                                        935) Silicon Graphics

**************************************************************************** 
Copyright 1996 HPCwire. Redistribution of this article is forbidden by 
law without the expressed written consent of the publisher. For a free trial 
subscription to HPCwire, send e-mail to trial@hpcwire.tgc.com.