Re: Simultaneous User Flavors

In reply to Ann's question:

From the point of view of the computing and network resources consumed,
there are indeed two main models (a technically-based answer from a service
provider).

>1.  Each Simultaneous User represents a log-in session that is happening. 
>The session lasts as long as the User is logged in without having exited
> from the system.  If the session has no activity for a certain number of
>minutes, it is set to time out.  (One of the vendors called this model
>"number of ports.") By such a definition, the library would want more
>simultaneous users than in #2: 

This model is typical of a more traditional (telnet-based) database
application, where a user logs in and has a session (and the resources to
support it) allocated from the time they log in until the time they log
out (or are disconnected).  With this kind of application, the number of
concurrent users is probably as good a measure as you'll get, whilst
retaining a reasonably simple subscription-based charging model.

>2.  Each Simultaneous User is recorded for the actual seconds/moments that 
>information is being requested and retrieved.  Time spent viewing the
>material (which in a Web situation is likely to now be on the User's
>machine) does not count as Simultaneous Use time.  Under this definition,
>one would need far fewer licenses, presumably.

This model is more typical of web-based applications, but it is far from
a reliable measure of the number of connected users.  Most web browsers
will initiate multiple simultaneous connections, and the number can be
configured to suit the user.  It is therefore impossible to correlate
concurrent server threads with users.  There is a fairly direct
correlation between the number of concurrent threads and the amount of
server resource required to service them, provided the server has enough
resource to avoid requests being queued.  This is complicated, however,
by the fact that the slower the user's connection (modem), the longer it
takes to retrieve a given amount of data.  On the server, this model has
the convenience of being relatively easy to set up and control, but it is
not terribly flexible or scalable.
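
To put some purely illustrative numbers on that: a browser configured to
open four simultaneous connections, fetching a page with a dozen inline
images over a slow modem, can tie up four server threads for several
seconds on its own, while a user reading a page already delivered to their
machine ties up none.  Counting threads therefore tells you about load on
the server, not about how many people are using the service.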

It's worth remembering that a well-run, high-performance web-based service
will show very low concurrency (simultaneous requests being processed)
compared with a poorly performing service carrying the same number of
users, simply because slow requests take longer to service and so are
present for longer.  Web traffic is very bursty by nature, and you really
need enough resource to service requests very quickly.  Our sizing goal
for servers and networks is that the user's PC and modem should be the
bottleneck, rather than anything we provide.
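
A rough worked example (the figures are hypothetical, not measurements
from our service): at 20 requests per second, a service that completes
each request in a quarter of a second averages only 20 x 0.25 = 5 requests
in progress at any instant, while the same 20 requests per second taking
2.5 seconds each averages 50.  Average concurrency is roughly the request
rate multiplied by the time taken to service a request, which is why the
faster service looks "quieter" under this measure.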

What about using IP addresses as well as threads?  This is not reliable
either.  The widespread use of proxy servers and firewalls (which - rightly
so - shield the true identity of the end user from the internet) means that
potentially hundreds or thousands of users can appear to have one IP
address.  In my company's case there are over 1500 users across Australia
who, to the internet, appear to have a single IP address.

Other options include constructing a pseudo-session with user logons and
an encrypted session key (transparent to the user), so that a mixture of
both models can be used.  This is what we do: in some sense it offers the
best of both worlds, and opens up other options such as pay-per-view.
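
For illustration only, here is a much-simplified sketch of the general
idea, written in Python (the names, the secret and the details are
invented for the example, not taken from our actual implementation):

    # Simplified illustration of a signed session key - not production code.
    import hashlib, hmac, time

    SERVER_SECRET = b"known-only-to-the-server"   # invented value

    def issue_session_key(user_id):
        # Issued once at logon; carried (invisibly to the user) on every
        # subsequent request.
        issued_at = str(int(time.time()))
        payload = user_id + "|" + issued_at
        signature = hmac.new(SERVER_SECRET, payload.encode(),
                             hashlib.sha256).hexdigest()
        return payload + "|" + signature

    def session_key_is_valid(key, max_age_seconds=1800):
        # Each request presents the key; the server checks signature and age,
        # so requests can be grouped back into user sessions (as in model 1)
        # while server resources are only consumed for the seconds each
        # request actually takes (as in model 2).
        user_id, issued_at, signature = key.rsplit("|", 2)
        expected = hmac.new(SERVER_SECRET, (user_id + "|" + issued_at).encode(),
                            hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and time.time() - int(issued_at) <= max_age_seconds)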

Usage-based pricing (pay per view, search, or download, perhaps with a
basic subscription charge as well) is probably the only model that comes
close to reflecting the ongoing costs of providing an electronic service
(ignoring the costs of getting the material and the service there in the
first place).

The "per paper copy" pricing model is very easy to understand and control,
with cost and copyright models that have been around for a very long time.
The online world is comparatively much more complicated, more difficult to
control, and less well understood by both sides of the table: distribution
costs are less directly related to circulation, and there is relatively
little experience with costing models.  It is not terribly surprising,
then, that there is a fairly high level of inconsistency between
suppliers.  Nor are librarians, publishers and people such as us able to
predict the uptake of new online services with anything like the accuracy
possible for paper, and user expectations for the availability of an
online service are somewhat different to those for paper.

At the end of the day the publisher and his service supplier (just like his
printer and freight company) must be adequately recompensed for the
resources they need to expend to make this material available online.  If
they cannot make a commercial success of an online service, it will
disappear, to the ultimate cost of all.

Regards,
Ken Robinson  


_________________________________________________________________
Ken Robinson                  | Ph    +61-2-410-4612 
Senior Solutions Consultant   | Fax   +61-2-411-8603
Online and Network Services   | Email kenr@fujitsu.com.au
Fujitsu Australia Limited     | Mobile 014-998-334
475 Victoria Avenue,          |
Chatswood.  NSW. 2067         |
Australia                     |
_________________________________________________________________



