JMLC Invited Talks

The Essence of Programming Languages

Niklaus Wirth (ETH Zürich)

Abstract

Only a few years after the invention of the first programming languages, the subject of language design flourished, and a whole flurry of languages appeared. Soon programmers had to make their choice. This brings us to the question of how languages were selected, and what the criteria of selection were. Are there any general criteria of quality? What is truly essential? In spite of the convergence to a few widespread, popular languages in recent years, these questions remain relevant, as the search for a "better" language continues among programmers. Do technical merits and style still matter, or is the support of a vendor the principal or even the only concern?

CV

Niklaus Wirth was born in 1934 in Winterthur, Switzerland. After graduating from ETH Zurich in electrical engineering in 1959, he received his master's degree from Laval University in Quebec, Canada, in 1960 and went on to complete his Ph.D. in programming languages in 1963 under Professor H. D. Huskey at the University of California at Berkeley. Wirth served as an assistant professor at the newly founded computer science department at Stanford University from 1963 to 1967.
During this time he became intensely involved with questions regarding programming-language development.
In 1967 Wirth returned as a professor to Zurich, first to the university, then in 1968 to ETH, where he co-founded the Department of Computer Science. The development of computer science as an academic major and as a research field at ETH was and is decidedly influenced by Wirth. In March 1999 Wirth retired as professor at ETH.


Safe Code: It's Not Just For Applets Anymore

Michael Franz (University of California, Irvine)

Abstract

Security guarantees for mobile code are easier to reason about at the source-language level. However, the two major mobile-code techniques, bytecode and proof-carrying code (with its variants), take a low-level view of mobile code. We argue that the large semantic gap between high-level source and low-level mobile code creates inefficiencies both in reasoning about the security properties of the code and in its performance.

We have invented an alternative mobile code representation that encodes programs at a higher level. It is much easier to transport source-level semantics in our encoding than in the prevalent low-level approaches.
Our encoding also provides safety by construction, as illegal programs cannot even be expressed in it. Other advantages of our encoding are an excellent compression factor, and the ability to safely transport performance-enhancing annotations.

Towards the end of my talk, I will outline my vision of the "secure desktop computer system" of the future. We are in the process of creating such a system as a reference architecture, in which compilers play an essential part.

CV

Prof. Michael Franz leads a research group of 14 Ph.D. students and three Post-Doctoral fellows at the University of California, Irvine. His current research focuses primarily on security and efficiency aspects of mobile code. Other research interests include code compression, dynamic compilation, compiling for low power use, and programming languages and architectures for component-based software construction.
Franz received a Dr. sc. techn. degree in Computer Science and a Dipl. Informatik-Ing. degree, both from the Swiss Federal Institute of Technology, ETH Zurich.


Computing with Distributed Resources

Jayadev Misra  (University of Texas at Austin) 

Abstract

The metaphor "Network is the Computer" has received much attention lately. We may view the network as a repository of data, typically stored in distributed objects, which resembles the primary (and secondary) storage of a traditional computer. The underlying instruction set for the network computer consists of method calls on these objects; the effect of a method call is to modify the state of the object (similar to a "store" instruction in a traditional computer) and/or return some value (similar to a "load" instruction).

This talk describes a small set of control structures for such a computer. There are several differences between a network and a sequential (von Neumann) computer that cannot be ignored. First and foremost, multiple programs may access or modify an object concurrently, so computation is necessarily distributed. Additional significant issues are the cost of remote data access, failures of network nodes and communication links, and the security issues inherent in distributed computing. In this talk, we show that our proposed control structures address these issues and that it is fairly straightforward to design a host of distributed applications with them.
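
To make the instruction-set analogy concrete, here is a minimal sketch in Java (illustrative only; it is not Misra's notation, and the local object stands in for a remote one): a method that updates an object's state plays the role of a "store", and a method that returns a value plays the role of a "load".

    // Illustrative sketch only: a shared object whose methods act as the
    // "instructions" of the network computer. Not Misra's proposed notation.
    interface Counter {
        void add(int delta);   // modifies object state: analogous to a "store"
        int read();            // returns a value: analogous to a "load"
    }

    class LocalCounter implements Counter {
        private int value;
        // synchronized, since multiple programs may call methods concurrently
        public synchronized void add(int delta) { value += delta; }
        public synchronized int read() { return value; }
    }

    public class NetworkComputerDemo {
        public static void main(String[] args) {
            Counter c = new LocalCounter();  // on a real network this would be
            c.add(5);                        // a remote proxy (e.g. an RMI stub)
            System.out.println(c.read());    // prints 5
        }
    }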

CV

Jayadev Misra is a professor and holder of the Schlumberger Centennial chair in Computer Sciences at the University of Texas at Austin. He received his Ph.D. in 1972 from the Johns Hopkins University. He has been a faculty member at the University of Texas at Austin since 1974, except for a sabbatical during 1983-1984 spent at Stanford University.

His research interests are in the area of concurrent programming, with emphasis on rigorous methods to improve the programming process. He is a past editor of several journals in this area, including Computing Surveys, Journal of the ACM, Information Processing Letters, and Formal Aspects of Computing. He is the author of two books: Parallel Program Design: A Foundation (Addison-Wesley, 1988), co-authored with Mani Chandy, and A Discipline of Multiprogramming (Springer-Verlag, 2001).

Misra is a fellow of ACM and IEEE; he held the Guggenheim fellowship during 1988-1989. He was the Strachey lecturer at Oxford University in 1996, and he held the Belgian FNRS International Chair of Computer Science in 1990.


 

Euro-Par Invited Talks

Databases, Web Services, and Grid Computing – Standards and Directions

Stefan Dessloch (University of Kaiserslautern)

Abstract

Over the last two years, web services have emerged as a key technology for distributed object computing, promising significant advantages for overcoming interoperability and heterogeneity problems in large-scale, distributed environments. Major vendors have started to incorporate web service technology into their database and middleware products, and companies are starting to exploit the technology in information integration, EAI, and B2B-integration architectures. In the area of Grid computing, which aims at providing a distributed computing architecture and infrastructure for science and engineering, web services have become an important piece of the puzzle by providing so-called Grid Services that help realize the goal of virtual organizations to coordinate resource sharing and problem-solving tasks. The adequate support of data management functionality, or data-oriented services in general, within this architectural setting is undoubtedly a key requirement, and a number of approaches have been proposed by both research and industry to address the related problems. This talk will give an overview of recent developments in the areas outlined above and discuss important standardization activities as well as trends and directions in industry and research.

CV

Stefan Deßloch is a professor at the University of Kaiserslautern, Germany, where he heads the research group “Heterogeneous Information Systems”. Major areas of interest include information integration, database-oriented middleware technologies, databases and the web, XML and databases, as well as extensible and object-relational database management systems. In these areas, he has published many conference papers, journal articles, and book chapters, has presented conference tutorials, and holds a number of patents.

Stefan earned his doctoral degree at the University of Kaiserslautern in 1993. Until 2002, he was a database architect at the IBM Database Technology Institute in San Jose, California, where he was responsible for IBM's database-related standardization efforts in the areas of SQL, XML, and Java. Moreover,  he conducted projects in the areas of  information integration architecture and programming models, database & application server integration, and object-relational database extensions.


Ibis: A Java-based grid programming environment

Henri E. Bal

(Department of Computer Science, Vrije Universiteit, Amsterdam, The Netherlands)
 

Abstract

Ibis is an ongoing research project in which we are building a Java-based Grid programming environment for distributed supercomputing applications. Java’s high portability allows parallel applications to run on a heterogeneous grid without porting or recompilation. A major problem in using Java for high-performance computing, however, is the inferior performance and limited expressiveness of Java’s Remote Method Invocation (RMI). Earlier projects (e.g., Manta) solved the performance problem, but at the cost of a runtime system written in native code, which gives up Java’s high portability. The philosophy behind Ibis is to try to obtain good performance without using any native code, but to allow native solutions as special-case optimizations. For example, a Grid application developed with Ibis can use a pure-Java RMI implementation over TCP/IP that will run “everywhere”; if the application runs on, say, a Myrinet cluster, Ibis can load a more efficient RMI implementation for Myrinet that partially uses native code. The pure-Java implementation of Ibis performs several optimizations using bytecode rewriting. For example, it boosts RMI performance by avoiding the high overhead of the runtime type inspection that current RMI implementations perform. The special-case implementations apply more aggressive optimizations, even allowing zero-copy communication in certain cases.
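
To illustrate the runtime-type-inspection point, the following sketch (plain standard Java, not Ibis code; the writeFields method is a hypothetical stand-in for what a bytecode rewriter could generate) contrasts generic reflective serialization, as used by standard RMI, with a type-specific writer that avoids inspection.

    // Contrast between reflective serialization (runtime type inspection, as in
    // standard RMI) and a generated, type-specific writer. Hypothetical example.
    import java.io.*;

    class Point implements Serializable {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        // A type-specific serializer: fields written directly, no reflection.
        void writeFields(DataOutputStream out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }
    }

    public class SerializationDemo {
        public static void main(String[] args) throws IOException {
            Point p = new Point(3, 4);

            // Generic path: ObjectOutputStream inspects the object's type at
            // run time, which is a large part of standard RMI's overhead.
            ByteArrayOutputStream generic = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(generic);
            oos.writeObject(p);
            oos.flush();

            // Specialized path: the generated method avoids type inspection.
            ByteArrayOutputStream special = new ByteArrayOutputStream();
            p.writeFields(new DataOutputStream(special));

            System.out.println("generic: " + generic.size()
                    + " bytes, specialized: " + special.size() + " bytes");
        }
    }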

The Ibis programming environment consists of a communication runtime system with a well-defined interface and a range of communication paradigms implemented on top of this interface, including RMI, object replication, group communication, and collective communication, all integrated cleanly into Java. Ibis has also been used to implement Satin, a Cilk-like wide-area divide-and-conquer system in Java. Experiments have been performed on two Grid test beds, the Dutch DAS-2 system and the (highly heterogeneous) European GridLab test bed. Our current research on Ibis focuses on fault tolerance and on heterogeneous networks.
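
As a rough illustration of the divide-and-conquer style that Satin targets, the sketch below uses plain Java threads as a stand-in for Satin's spawn/sync primitives (the real system uses annotations and wide-area load balancing; none of this code is from Ibis or Satin).

    // Divide-and-conquer Fibonacci: the recursive calls are the units that a
    // Satin-like system would "spawn" across the grid. Plain threads here.
    public class FibDemo extends Thread {
        private final int n;
        private long result;

        FibDemo(int n) { this.n = n; }

        public void run() { result = fib(n); }

        static long fib(int n) {
            if (n < 2) return n;
            if (n > 20) {                          // spawn only above a threshold,
                FibDemo left = new FibDemo(n - 1); // to bound the thread count
                left.start();
                long right = fib(n - 2);
                try { left.join(); } catch (InterruptedException e) { }
                return left.result + right;        // "sync": both halves are done
            }
            return fib(n - 1) + fib(n - 2);
        }

        public static void main(String[] args) {
            System.out.println("fib(30) = " + fib(30));   // prints 832040
        }
    }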

CV

ftp://ftp.cs.vu.nl/pub/bal/cv.html


 

Common Euro-Par and JMLC Invited Talks

The Verifying Compiler: A Grand Challenge For Computing Research

C.A.R. Hoare (Microsoft Research, Oxford University)

Abstract

I propose a set of criteria which distinguish a grand challenge in science or engineering from the many other kinds of short-term or long-term research problems that engage the interest of scientists and engineers. As an example drawn from Computer Science, I revive an old challenge: the construction and application of a verifying compiler that guarantees correctness of a program before running it.

CV

Tony Hoare first encountered Bob Floyd's suggestion for a verifying compiler in 1968, when he took up his post as Professor of Computing Science at the Queen's University, Belfast. Most of his academic research since that time has been directed towards the problems of program specification and verification. On joining Microsoft Research in 1999, he realised that there was an increasing need for software verification, as well as an increasing capability to meet that need. The achievement will be realised only by a concerted long-term and international endeavour, of the kind that in other sciences is called a Grand Challenge.


Evolving a Multi-Language Object-Oriented Framework: Lessons from .NET

Jim Miller (Microsoft Corporation)

Abstract

In 2001 Microsoft shipped the first public version of its Common Language Runtime (CLR) and the associated object-oriented .NET Framework. This Framework was designed for use by multiple languages through adherence to a Common Language Specification (CLS). The CLR, the CLS, and the basic level of the .NET Framework are all part of International Standard ISO/IEC 23271. Over 20 programming languages have been implemented on top of the CLR, all providing access to the same .NET Framework, and over 20,000,000 copies have been downloaded since its initial release.

As a commercial software vendor, Microsoft is deeply concerned with evolving this system. Innovation is required to address new needs, new ideas, and new applications. But backwards compatibility is equally important, to give existing customers the confidence that they can build on a stable base even as it evolves over time. This is a hard problem in general; it is made harder by the common use of virtual methods and public state, and harder still by the desire to make the programming model simple.
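
A minimal sketch of the hazard (written in Java syntax, since the problem is not specific to the CLR, and not an example from the talk): a virtual method added to a framework base class in a new version can be silently overridden by an unrelated method that a client wrote against the old version.

    // Framework, version 2 (version 1 had no size() method).
    class Widget {
        public int size() { return 0; }       // newly added virtual method
    }

    // Client code, written against version 1.
    class MyWidget extends Widget {
        // Intended as an unrelated helper; once the framework adds
        // Widget.size(), this silently becomes an override.
        public int size() { return 42; }
    }

    public class EvolutionDemo {
        public static void main(String[] args) {
            Widget w = new MyWidget();
            System.out.println(w.size());     // prints 42, not the framework's 0
        }
    }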

This talk will describe the architectural ramifications of combining ease of use with system evolution and modularity. These ramifications extend widely throughout the system infrastructure, ranging from the underlying binding mechanism of the virtual machine, through programming-language syntax extensions, to the programming environment.

CV

Jim Miller holds a PhD in Computer Science from MIT (Parallel Processing under Bert Halstead), and served on the faculty at Brandeis University as well as on the research staff at MIT (both the AI Lab and the Lab for Computer Science). He has been on the research staff at Digital Equipment Corporation and the Open Software Foundation. Before joining
Microsoft, he was on the senior management team of the World Wide Web Consortium, reporting to Tim Berners-Lee and in charge of work on security, electronic commerce, child protection, privacy protection, accessibility, and intellectual property protection. He joined Microsoft in 1998, leading the program management team for the kernel of the .NET Common Language Runtime (CLR), and is currently Software Architect for the CLR. His responsibilities include garbage collection, metadata definition and file formats, intermediate language (IL) definition, IL-to-native code compilation, and remote objects. He also serves as editor for ECMA TC39/TG3, which is charged with creating an international standard for a Common Language Infrastructure. His current work involves designing an architecture that allows innovation in the core of the CLR and the managed Frameworks while preserving backward compatibility.


 

JMLC Tutorials

C# - The modular language for the 2000s

Judith Bishop (University of Pretoria) and Nigel Horspool (University of Victoria)

Abstract

C# is seen as Microsoft's alternative to Java, but it is much more than that. This tutorial looks deeper into how object-orientation has developed over the past seven years, and at the new features that C# offers for cleaner and more secure programming. These include a uniform treatment of types, properties, collections, indexers, events and delegates, among others. As with any new language, it is important to get into the ethos of it, and we look at how C#'s features interact, and how new orderings of topics for learning and teaching the language are needed to fully exploit its advantages. Several novel examples and case studies are used to illustrate these points.

C# is firmly embedded in the .NET Framework with its common type system, and we examine the advantages this arrangement brings to cross-language programming, showing examples of linking C# with other languages, in particular Java. Different platforms for C# are covered and demonstrated, including the public-domain, ECMA-standard-based Rotor, which is available for Unix and Mac OS X as well as Windows. To support cross-platform GUIs we have developed the small-footprint, XML-based GUI system Views, and we show how Views enables rapid program development without the resource-intensive backing of Windows. Debugging C# programs and the use of debuggers conclude the tutorial.

This tutorial will suit computing professionals and educators who are familiar with Java and who would like to learn what C# has to offer as a new language in a wider context than just a Microsoft world.

CV

Judith Bishop is a Professor of Computer Science at the University of Pretoria, South Africa, and the author of twelve books on languages, including Java. Nigel Horspool is Professor and Head of the Department of Computer Science at the University of Victoria, Canada, and the author of two books on C and Unix. The speakers are the immediate past chairman and secretary of IFIP WG2.4 (Software Implementation Technology) and serve on several program committees related to component development, languages, and compilers. They are joint authors of an upcoming book on C#, to be published by Addison-Wesley in June 2003, and are holders of a Microsoft Research Rotor Grant for work on platform-independent GUIs.


Design by Contract and the Eiffel method

Bertrand Meyer

(ETH Zürich and Eiffel Software)

Abstract

A number of O-O design principles, chief among them Design by Contract, can lead to a better software process and better products when supported by appropriate tools and notations. The Eiffel approach, comprising a method, a language and a set of tools, attempts to provide such a framework for quality; it covers the full lifecycle, from requirements analysis to implementation, debugging and maintenance. I will present the key aspects, emphasizing their practical effects on the software process, the role of the development environment, and the pervasive role of contracts.
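
For readers unfamiliar with the idea, here is a rough approximation of a contract (a hypothetical example, not taken from the tutorial): Eiffel states preconditions, postconditions and class invariants natively with require/ensure/invariant clauses, which the sketch below mimics with Java assertions (run with java -ea).

    // Design by Contract, approximated with assertions. Eiffel would express
    // the same contract declaratively in the routine's interface.
    public class AccountDemo {
        private int balance;                           // invariant: balance >= 0

        void withdraw(int amount) {
            assert amount > 0 && amount <= balance     // precondition
                    : "precondition violated";
            int old = balance;
            balance -= amount;
            assert balance == old - amount && balance >= 0   // postcondition
                    : "postcondition violated";              // and invariant
        }

        public static void main(String[] args) {
            AccountDemo a = new AccountDemo();
            a.balance = 100;
            a.withdraw(30);
            System.out.println("balance = " + a.balance);    // prints 70
        }
    }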

Trusted components and attempts at proofs

Bertrand Meyer

(ETH Zürich and Eiffel Software)

Abstract

"Trusted components" are reusable software elements with guaranteeable quality attributes. The presentation summarizes the justification for this concept, summarize current achievements, and outline techniques for developing actual mathematical proofs of a specific kind of components: classes in an object-oriented language.

CV

Bertrand Meyer is interested in object technology and systematic software construction techniques.


The Microsoft .NET Technology

Wolfgang Beer, Dietrich Birngruber, Hanspeter Mössenböck, Albrecht Wöß

(University of Linz, Institute of Practical Computer Science)

Abstract

.NET is Microsoft's new software platform that significantly improves the way desktop applications and Web-based software are developed under Windows. It is similar to the Java environment but in some respects more advanced. Like Java, .NET comes with a runtime system (the Common Language Runtime). It provides type safety, garbage collection, versioning and full interoperability between more than 20 programming languages. There is also a new programming language, called C#, that is similar to Java but more convenient to use. .NET is particularly strong in the development of distributed Web Services and dynamic Web pages using ASP.NET.

This tutorial is an introduction to the complete .NET technology. It shows the general concepts of .NET, demonstrates their use in practical applications, and compares them to similar concepts, e.g. in Java. With numerous examples and pointers to the literature, participants will be enabled to develop .NET applications themselves.

CV

Prof. Dr. Hanspeter Mössenböck is a full professor at the University of Linz, Austria. He is interested in programming languages, compilers, and software engineering, especially in the area of object-oriented and component-based software development.

Dr. Dietrich Birngruber has a PhD from the University of Linz. He is interested in component technologies such as EJB, COM+, CORBA and of course .NET. His current research is in the area of software composition languages.

Dipl.-Ing. Wolfgang Beer is a research assistant at the University of Linz. He is working on mobile and context-dependent applications and in data visualization. Currently he is doing his PhD in a project that deals with context frameworks.

Dipl.-Ing. Albrecht Wöß is a research assistant at the University of Linz working in object-oriented software development and compiler construction. After studying in the US for a year he worked in an eLearning project.

The speakers are the authors of the book

       Die .NET-Technologie, dpunkt.verlag, 2003

       http://dotnet.jku.at

which is currently being translated into English and will be published by Addison-Wesley.


 

Euro-Par Tutorials

Carrier Grade Linux Platforms: Characteristics and Development Efforts

Ibrahim Haddad (Ericsson Research)

Abstract


The interest of the telecom world in clustering and distributed systems comes from the fact that we can address availability and scalable performance using cost-effective hardware and software while maintaining telecom-grade characteristics. These characteristics include linear scalability using loosely coupled processors, continuous service availability, high reliability, superior performance, and ease and completeness of management.

On the other hand, the interest of the telecom world in Linux as the operating system for telecom platforms has many reasons. Linux is open source; thus, we have access to the source code to rapidly fix shortcomings and bugs in the kernel whenever required, or to add any hooks needed by the middleware. In addition, it supports multiple architectures (openness in hardware) and does not enforce dependency on a single hardware/processor vendor. Linux also supports many programming languages (openness in software). Furthermore, it supports third-party software and APIs. Moreover, it supports new networking protocols such as IPv6; typically, new IP features appear on Solaris 6 to 18 months later than on Linux.

This tutorial presents where Linux Clusters stand today in the telecom industry. We will review the features and characteristics of carrier grade systems and see if and how Linux Clusters meet the telecom requirements. We will discuss what the industry is doing to promote, design, and develop Carrier Grade Linux, and share our operational experience with Carrier Grade Linux.

CV

Ibrahim Haddad is a researcher at the Ericsson Corporate Unit of Research in Montreal, Canada, involved with the system architecture of third-generation wireless IP networks. He is responsible for guiding open source contributions from Ericsson that promote the use of Linux in telecommunications within the Open Source Development Lab framework. Ibrahim is a contributing editor of the Linux Journal. He has delivered several talks at universities, IEEE and ACM conferences, and open source forums. He received his Bachelor and Master degrees in Computer Science from the Lebanese American University, chartered by the University of the State of New York. He is currently a Dr. Sc. candidate at Concordia University in Montreal, Canada, where he received both the J. W. McConnell Memorial Graduate Fellowships and the Concordia University 25th Anniversary Fellowship.


Project JXTA: An Open P2P Platform Architecture

Bernard Traversat (Sun Microsystems)

Abstract

Project JXTA is the industry leading open-source project that defines a set of protocols for ad hoc, pervasive, peer-to-peer computing. The Project JXTA protocols establish a virtual network overlay on top of the Internet and non-IP networks, allowing peers to directly interact and organize independently of their network location (firewalls or NATs). The Project JXTA virtual network employs five abstractions that virtualize existing computing networks. First, Project JXTA uses a uniform peer addressing scheme that spans the entire JXTA network. Every peer is uniquely identified by a Peer ID, and its associated peer endpoint. Second, Project JXTA lets peers dynamically self-organize into protected domains called peergroups. A peer can belong to as many peergroups as it wishes. Users, service providers, and network administrators can create peergroups to scope and control peer interactions. Peergroups virtualize the notion of firewalls, subdividing the network into secure regions without consideration for physical network boundaries. Third, Project JXTA uses advertisements (XML documents) to advertise all network resources (peer, peergroup, endpoint, service, content). Advertisements provide a uniform way to publish and discover network resources. Each advertisement has a lifetime that specifies the lifetime of its associated resource. Lifetimes permit obsolete resources to decay, and enable the self-healing of networks without centralized control. Fourth, Project JXTA defines a universal binding mechanism, called the resolver, to perform all resolution operations required in a distributed system. These include resolving a peer name into an IP address (DNS), binding an IP socket to a port, or locating a service (Directory service). In JXTA, all resolution operations are implemented as the simple discovery or search of one or more advertisements. 
Finally, Project JXTA introduces the concept of non-localized communication channels called pipes. Pipes enable services and applications to advertise communication access points represented by pipe advertisements. Pipes enable services to dynamically connect to each other to construct complex services. The input and output ends of a pipe are dynamically bound to physical peers at runtime.
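
The role of advertisement lifetimes can be sketched as follows (a hypothetical illustration in Java; real JXTA advertisements are XML documents with a defined schema, and this class is not part of the JXTA API): once the lifetime has elapsed, peers simply treat the advertised resource as gone, so the network heals without any central authority.

    // Hypothetical sketch of an advertisement carrying a lifetime, so that
    // stale resources decay without centralized control.
    public class AdvertisementDemo {
        static class Advertisement {
            final String resourceId;
            final long expiresAtMillis;

            Advertisement(String resourceId, long lifetimeMillis) {
                this.resourceId = resourceId;
                this.expiresAtMillis = System.currentTimeMillis() + lifetimeMillis;
            }

            boolean expired() {
                return System.currentTimeMillis() > expiresAtMillis;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Advertisement ad = new Advertisement("urn:example:pipe:1234", 100);
            System.out.println("expired? " + ad.expired());   // false
            Thread.sleep(150);
            System.out.println("expired? " + ad.expired());   // true: peers now
        }                                                      // discard the entry
    }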

The tutorial will present a detailed description of the JXTA virtual network concepts. We will cover advertisements, rendezvous, pipes, secure peergroups, and the decentralized certificate security model used by JXTA. We will discuss the available implementations of the JXTA protocols in Java (J2SE and J2ME) and C. We will introduce the JXTA Shell to allow interactive access to the JXTA network and show how to write new Shell commands. We will cover the Java and C API bindings using tutorial examples and sample code to illustrate how to write JXTA applications. We will show how to convert a legacy network service to use JXTA. Finally, we will discuss the JXTA open-source community and future directions of the JXTA technology.

CV

Bernard Traversat has been one of the lead senior architects of Project JXTA at Sun Microsystems since the project started. He leads the Sun core engineering team, evangelizing JXTA to the open source community and to Sun customers and partners. Previously, he led Sun's effort in pervasive computing for small consumer devices, and was a lead developer on the SunCluster product. Prior to that, he worked at the NASA Ames Research Center on distributed-memory operating systems for massively parallel supercomputers. He is a co-author of the initial MPI-IO extension specification. He received his Ph.D. degree from Florida State University, and an M.S. in Applied Math from the University of Lyon (France).


Pervasive Computing

Alois Ferscha (University Linz)

Abstract

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” was Mark Weiser’s central statement in his seminal paper [Weis 91] in Scientific American in 1991. His conjecture that “we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background” has fostered the embedding of ubiquitous computing technology into physical environments that respond to people’s needs and actions. Most of the services delivered through such a “technology-rich” environment are services adapted to context, particularly to the person, the time and the place of their use. Along Weiser’s vision, it is expected that context-aware services will evolve, enabled by wirelessly ad-hoc networked, mobile, autonomous, special-purpose computing devices (i.e. “smart appliances”), providing largely invisible support for tasks performed by users. It is expected that services with explicit user input and output will be replaced by a computing landscape that senses the physical world via a huge variety of sensors and controls it via a manifold of actuators, in such a way that the physical world becomes merged with the virtual world. Applications and services will have to be based largely on the notion of context and knowledge, will have to cope with highly dynamic environments and changing resources, and will need to evolve towards a more implicit and proactive interaction with users.

A second historical vision impacting the evolution of pervasive computing called for intuitive, unobtrusive and distraction-free interaction with technology-rich environments. In an attempt to bring interaction “back to the real world” after an era of keyboard-and-screen interaction with computers, computers came to be understood as secondary artefacts, embedded and operating in the background, whereas the set of all physical objects present in the environment came to be understood as the primary artefacts, the “interface”. Instead of interacting with digital data via keyboard and screen, physical interaction with digital data, i.e. interaction by manipulating physical artefacts via “graspable” or “tangible” interfaces, was proposed. Tangible interface research has evolved accordingly, with physical artefacts considered as both (i) representations of and (ii) controls for digital information. A physical object thus represents information while at the same time acting as a control for directly manipulating that information or its underlying associations. With this seamless integration of representation and control into a physical artefact, input and output devices also fall together.

In this tutorial I will give a state-of-the-art survey of the field of pervasive computing. Starting from a historical view and a discussion of technology trends, I will identify the main concepts, approaches and methods of pervasive computing, identify the most topical research challenges, and work out the potentials with respect to industrial, commercial, societal and personal applications. Technically, I will demonstrate how recent technological advances in sub-micron and system-on-a-chip design, wireless communications, micro-electro-mechanical systems, materials sciences, etc. have accelerated the development of low-cost, low-power, multifunctional, autonomous, embedded systems that are tiny in size and communicate untethered over short distances. Evidently, personal computing is giving way to a pervasive computing landscape populated by vast ubiquitous networks of wirelessly ad-hoc networked, mobile, wearable, autonomous, special-purpose computing and communication appliances (“Smart Things”) and environments (“Smart Spaces”). Interaction, and consequently cooperation, in such environments is usually done implicitly (and invisibly) via a variety of sensors on the input side and actuators on the output side.

This paradigm shift towards pervasive computing poses serious challenges to the conceptual architectures of coordination and to the related engineering disciplines in computer science. In my presentation I will reflect on the upcoming “theory of coordination”, the emerging pervasive and ubiquitous “cooperative” computing challenges and potentials, and in particular on the engineering issues associated with the provision of context-aware cooperative systems. To ease the development of cooperative applications and services that have to be based on the notion of context, we have built “context frameworks”, i.e. networked embedded software systems able to (i) describe, gather, transform, interpret and disseminate context information within ad-hoc, highly dynamic and frequently changing computing environments, (ii) dynamically discover, inspect, compose and aggregate software components in order to identify, control and extend context, as well as to overcome context barriers (such as time, position, user preference, etc.), (iii) allow for dynamic interactions among components in a scalable fashion while satisfying requirements such as fidelity, QoS, fault tolerance, reliability, safety and security, (iv) integrate heterogeneous computing environments and devices with different functionality, ability, form factor, size and limited resources with respect to processing power, memory size, communication, I/O capabilities, etc., (v) support the adaptation of novel forms of sensitive, tangible, situative, non-distracting user interfaces not limited to particular modes of interaction or input/output devices, and (vi) allow for the implementation of learning, self-adaptive, plan-oriented, intelligent system behaviour.
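
As one possible reading of point (i), the sketch below shows how sensed context might be described and disseminated to interested components; the interfaces are hypothetical illustrations, not the actual framework discussed in the tutorial.

    // Hypothetical publish/subscribe skeleton for context dissemination.
    import java.util.*;

    interface ContextListener {
        void contextChanged(String key, Object value);
    }

    class ContextRepository {
        private final Map<String, Object> state = new HashMap<String, Object>();
        private final List<ContextListener> listeners = new ArrayList<ContextListener>();

        void subscribe(ContextListener l) { listeners.add(l); }

        void publish(String key, Object value) {       // e.g. called by a sensor
            state.put(key, value);
            for (ContextListener l : listeners) l.contextChanged(key, value);
        }
    }

    public class ContextDemo {
        public static void main(String[] args) {
            ContextRepository repo = new ContextRepository();
            repo.subscribe(new ContextListener() {
                public void contextChanged(String key, Object value) {
                    System.out.println(key + " -> " + value);
                }
            });
            repo.publish("location", "room 42");       // prints: location -> room 42
        }
    }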

Finally, I will present and discuss system prototypes (“Wireless Campus”, “Digital Aura”, “WebWall”, “SmartLuggage”, “DigiScope”) that we have developed at the University of Linz to demonstrate pervasive computing systems in operation, to investigate novel forms of interaction and cooperation, and to envision their prospective potential in industrial, business and commercial settings.

CV

Alois Ferscha received a PhD in business informatics (1990) from the University of Vienna, Austria. From 1986 through 2000 he was with the Department of Applied Computer Science at the University of Vienna as an assistant and later associate professor. In 2000 he joined the University of Linz as a full professor, where he is now head of the Department for Practical Computer Science.

The background of Prof. Ferscha is in topics related to parallel and distributed computing, such as Computer Aided Parallel Software Engineering, Performance Oriented Distributed/Parallel Program Development, Parallel and Distributed Discrete Event Simulation, and Performance Modeling/Analysis. Currently he is focused on Pervasive Computing, Embedded Software Systems, Wireless Communication, Multiuser Cooperation, Distributed Interaction and Distributed Interactive Simulation. He is responsible for the research “Topical Focus” on Pervasive Computing at the University of Linz.


A Family of Multimedia Representation Standards: MPEG-4/7/21

Fernando Pereira (Universidade Técnica de Lisboa)
Hermann Hellwagner (University Klagenfurt)

Abstract

The ISO/MPEG standardization committee has been responsible for the successful MPEG-1 and MPEG-2 standards that have given rise to widely adopted commercial products and services, such as Video-CD, DVD, digital television, digital audio broadcasting (DAB) and MP3 (MPEG-1 Audio Layer 3) players and recorders. More recently, the MPEG-4 standard was defined to address the emerging needs of the communication, interactive and broadcasting service models, as well as of the mixed service models resulting from their technological convergence. The MPEG-4 object-based representation approach, in which a scene is modeled as a composition of objects, both natural and synthetic, with which the user may interact, is at the heart of the MPEG-4 technology. With this new coding approach, the MPEG-4 standard opens new frontiers in the way users will play with, create, re-use, access and consume audiovisual content.
Following the same vision underpinning MPEG-4, MPEG initiated another standardization project, MPEG-7, addressing the problem of describing multimedia content with metadata to allow the quick and efficient searching, processing and filtering of various types of multimedia material. The need for a powerful solution for quickly and efficiently identifying, searching, filtering, etc., various types of multimedia content of interest to the user, human or machine, also using non-text-based technologies, follows directly from the urge to use the available multimedia content efficiently and the difficulty of doing so.
Following the development of the standards mentioned above, MPEG acknowledged the lack of a “big picture” describing how the various elements building the infrastructure for the deployment of multimedia applications relate to each other, or whether open standard specifications are missing for some of these elements. To address this problem, MPEG started the MPEG-21 project, formally called the multimedia framework, with the aim of understanding if and how these various elements fit together and of discussing which new standards may be required if gaps in the infrastructure exist. Once this work has been carried out, new standards will be developed for the missing elements, with the involvement of other bodies where appropriate, and finally the existing and novel standards will be integrated in the MPEG-21 multimedia framework. The MPEG-21 vision is thus to define an open multimedia framework to enable the transparent and augmented delivery and consumption of multimedia resources across a wide range of networks and devices used by different communities. The MPEG-21 multimedia framework will identify and define the key elements needed to support the multimedia value and delivery chain, as well as the relationships between them and the operations they support. This open framework guarantees all content creators and service providers equal opportunities in the MPEG-21-enabled open market. This will also be to the benefit of content consumers, who get access to a large variety of content in an interoperable manner.
This tutorial will address the evolution and current status in terms of MPEG technologies and standards as well as the most relevant emerging developments.

CV

Fernando Pereira

Fernando Pereira was born in Vermelha, Portugal, in October 1962. He graduated in Electrical and Computer Engineering from Instituto Superior Técnico (IST), Universidade Técnica de Lisboa, Portugal, in 1985. He received the M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from IST in 1988 and 1991, respectively.
He is currently a Professor at the Electrical and Computer Engineering Department of IST. He is responsible for the participation of IST in many national and international research projects. He is a member of the Editorial Board and Area Editor on Image/Video Compression of the Signal Processing: Image Communication journal, and an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Image Processing, and IEEE Transactions on Multimedia. He is a member of the scientific and program committees of dozens of international conferences and workshops. He has contributed more than 130 papers to journals and international conferences. He won the 1990 Portuguese IBM Award and, in October 1998, an ISO Award for Outstanding Technical Contribution for his participation in the development of the MPEG-4 Visual standard.
He has been participating in the work of ISO/MPEG for many years, notably as the head of the Portuguese delegation, as chairman of the MPEG Requirements group, and as chair of many Ad Hoc Groups related to the MPEG-4 and MPEG-7 standards.
His current areas of interest are video analysis, processing, coding and description, and multimedia interactive services.

Hermann Hellwagner

Hermann Hellwagner received his Dipl.-Ing. degree (in Informatics) and Ph.D. degree (Dr. techn.) in 1983 and 1988, respectively, both from the University Linz, Austria. From 1989 to 1994, he was senior researcher and team/project manager at Siemens AG, Corporate R&D, Munich, Germany. From 1995 to 1998, he was associate professor of parallel computer architecture at Technische Universität München (TUM). Since late 1998, he has been a full professor of computer science at ITEC at the University Klagenfurt, Austria.
Dr. Hellwagner is (co-)editor or (co-)author of some 50 publications in the areas of parallel computer architecture, parallel programming, and multimedia communications and adaptation. From 1999 to 2002, he was Subject Area Editor for Computer Architecture of Elsevier’s Journal of Systems Architecture (JSA). He initiated two annual international workshop series (HIPS and SCI-Europe), and has served as a program committee member for several international conferences and a reviewer for several journals and many conferences. He is currently co-chair of the conference Euro-Par 2003, to be held at the University Klagenfurt in August 2003. He is a member of the IEEE, GI and OCG as well as the head of the Austrian delegation (HoD) to MPEG.
His current areas of interest are distributed multimedia systems, multimedia communications, and Internet QoS. Current projects are on digital video communication, a streaming protocol supporting media adaptation, and bitstream description techniques within the MPEG-21 Digital Item Adaptation (DIA) standardization effort.


Grid Computing with Jini

Mark Baker (University of Portsmouth) and  
Zoltan Juhasz (University of Veszprém)

Abstract

Java is now frequently used for developing all manner of distributed and parallel scientific and engineering applications. Java has become particularly popular for developing the middleware necessary to support parallel and distributed applications. For example, Jini and JXTA from Sun Microsystems can both provide all the services necessary to support distributed applications on a local or wide-area basis. It is vital that these and other Java-based systems are able to interact and interoperate with the current and emerging generation of wide-area infrastructures, such as the Open Grid Services Architecture (OGSA) and Web Services.

In this tutorial, we discuss and demonstrate the current and emerging trends in the use of Java for wide-area distributed computing. In particular, we cover Java technologies in the areas of environments, architectures, middleware, and interaction with the emerging OGSA. Overall, we provide a complete overview of Jini-based technologies for distributed computing and how these can interact with emerging Grid environments. The tutorial will also cover advanced topics such as the distributed security and dynamic configuration features of the new version of Jini, which will be released in the near future. Throughout the tutorial, a variety of example programs and systems will be used to demonstrate the issues and techniques discussed.
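
As a small taste of the Jini programming model covered in the tutorial, the sketch below performs multicast discovery of lookup services with the standard net.jini.discovery API; it assumes the Jini libraries on the classpath, an appropriate security policy, and a lookup service reachable on the local network (a sketch under those assumptions, not part of the tutorial material).

    // Minimal multicast discovery of Jini lookup services.
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    public class DiscoveryDemo {
        public static void main(String[] args) throws Exception {
            // Listen for lookup services in all groups on the local network.
            LookupDiscovery discovery =
                    new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            discovery.addDiscoveryListener(new DiscoveryListener() {
                public void discovered(DiscoveryEvent e) {
                    System.out.println("found " + e.getRegistrars().length
                            + " lookup service(s)");
                }
                public void discarded(DiscoveryEvent e) { }
            });
            Thread.sleep(10000);    // wait for multicast responses, then exit
            discovery.terminate();
        }
    }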

This tutorial aims to provide a wide-ranging overview of Jini-based technologies for distributed computing. In particular, we concentrate on Jini, Web Services, and the emerging OGSA. The tutorial is intended to help general users, researchers, and programmers understand the operation and possible uses of Jini technologies for their wide-area infrastructure and applications.

CV

Dr Mark Baker
University of Portsmouth, UK

At the University of Portsmouth, Mark is a Reader in Distributed Systems and runs the Distributed Systems Group (DSG), which is actively engaged in research on parallel and distributed computing, most particularly distributed Java, cluster computing and Grid technologies. Mark has held various permanent and visiting academic posts at the universities of Cardiff, Edinburgh and Southampton in the UK, and at Syracuse University and Oak Ridge National Laboratory.

Mark is co-founder and co-chair of the IEEE Computer Society's Task Force on Cluster Computing (TFCC). In 2002, Mark served on the programme committees of, and was involved in the organisation of, over thirteen international workshops, symposia and conferences. In 2002, Mark was also co-chair of "Cluster 2002", the TFCC's international flagship conference, which was held in Chicago and hosted by Argonne National Laboratory (ANL) and the National Centre for Supercomputing Applications (NCSA).

Mark has presented tutorials at many international conferences, including in Berlin, Newport Beach, Manchester, Chemnitz, Dallas, Munich, Pittsburgh, Bangalore, Cochin and LA. The topics taught have ranged from Java and Jini to cluster computing and the Grid. A full list of Mark's recent tutorials can be found at http://dsg.port.ac.uk/mab/Tutorials/

Dr Zoltan Juhasz
University of Veszprem, Hungary

Dr Zoltan Juhasz is a Senior Lecturer (Assoc. Prof.) in the Department of Information Systems of the University of Veszprem, Hungary, where he leads parallel and distributed computing research. His group focuses on large-scale grid systems research, such as wide-area service discovery, dynamic and adaptive grid middleware, and the use of Java and Jini technologies in large service-oriented systems. His research interests also cover performance prediction of parallel and distributed systems as well as multi-agent systems and their application in grids. Currently, he leads a major grid project in Hungary whose aim is to develop a reference wide-area Jini grid system. In the past, Zoltan also held permanent and visiting appointments at The Queen's University of Belfast and the University of Exeter.

With Mark he presented tutorials on Java, Jini and Grid at the Euro-Par 2001 (Manchester) and CCGrid 2002 (Berlin) conferences.


Under the hood of Rotor, the Microsoft Shared Source CLI implementation

Peter Drayton

(Microsoft Research)

Abstract

Released in 2002, the .NET Framework represents the next major evolution of the Microsoft computing platform. At the core of this initiative is the Common Language Runtime (CLR), which provides a language-agnostic runtime that enables multi-language component-based development. The resulting multi-vendor adoption of Microsoft's .NET initiative means that the language interoperability and integration solutions promised in the 90's are now becoming a pervasive commercial reality, allowing language researchers to innovate in their particular domain while still interoperating with existing commercial and research-oriented language solutions.

Furthermore, the key specifications defining the CLR have also been standardized by ECMA & ISO as the Common Language Infrastructure (CLI), creating a standards-based platform for component development and cross-language integration. In late 2002, Microsoft released Rotor, a "Shared Source" implementation of the CLI available on Windows XP, FreeBSD and Mac OS X. For language designers, Rotor serves as an effective runtime core for experimentation at the language feature level. For compiler and virtual machine researchers, Rotor provides a context for applied research into alternative object representations, method dispatch, garbage collectors, JIT compilation and a host of other topics.

The goal of this tutorial is to provide an in-depth exploration of Rotor and the ways in which it implements key mechanisms in the CLI specification. Attendees will leave with an understanding of the core CLI abstractions, how Rotor implements these abstractions, and areas in which Rotor could be used or extended by language, compiler and virtual machine researchers.

CV

Peter Drayton works at Microsoft and has made essential contributions to the development of the .NET Framework.


 

 
