After obtaining a diploma in electrical engineering from EPFL in 1994, Alain Dufaux joined the Electronics and Signal Processing Laboratory of the Institute of Microtechnology (IMT) at the University of Neuchâtel. His research activities were first dedicated to speech compression and later to audio processing in general. His PhD thesis, defended in 2001, addressed the automatic detection and recognition of impulsive sound signals.
In 2001, Alain Dufaux took part in the launch of the startup company Dspfactory SA in Marin, a spin-off of Dspfactory Ltd. in Canada. Active in the field of ultra-low-power DSP processors for applications such as hearing aids, consumer audio and medical devices, the company was acquired in 2004 by AMI Semiconductor. Over six years, Alain was involved in low-power signal processing activities, successively as:
Head of signal processing group for advanced applications;
Senior member of the Applied DSP and embedded software group in Canada;
Signal processing specialist in the European customer support team, especially active in training courses and tutorials dedicated to low-delay multirate filterbanks and low-power signal processing techniques for hearing aids.
Alain Dufaux returned to EPFL in November 2006 as head of the Vision & Embedded DSP Group in the Laboratory of Microengineering for Manufacturing (LPM). In this role, most of his effort was dedicated to team management, the setup of research projects and industrial partnerships, and the support or co-direction of PhD students. In October 2008, Alain was appointed co-lecturer for the course “Méthodes de Production”.
In spring 2011, Alain Dufaux joined the MetaMedia Center (MMC). Acting as project manager, he splits his activity between the operational tasks of the Montreux Jazz Digital Project and the numerous innovation projects initiated by the MMC in partnership with the EPFL labs working on acoustics, signal processing and multimedia. Alain is also involved in the selection and design of the audio-visual experiences, based on innovative new multimedia technologies, that will be offered to the public of the future Montreux Jazz Lab.
Talk 1.1: The Montreux Jazz Digital Archive
“This is the most important testimonial to the history of music, covering jazz, blues and rock!” said Quincy Jones.
The EPFL MetaMedia Center (MMC) has the great honour of organizing the digitization and revival of the Montreux Jazz Festival recordings, an archive covering half a century of concerts by the most talented artists, captured with the state-of-the-art audio and video technologies of each era.
Since 1966, 5000 hours of audio and 5000 hours of video have been recorded on multiple tape formats. In partnership with Montreux Sounds SA and the company Vectracom in Paris, this content is currently being digitized; to date, half of the archive has been processed. The new reference media are stored on digital tapes in uncompressed formats. In addition, the same content is lightly compressed in a broadcast format and copied onto a new-generation hard-drive storage system provided by the start-up company Amplidata. From this hard-drive system, the archive will be made available to the research labs at EPFL, in either download or streaming mode. The project is meant to take care of this archive and keep it alive for many years.
Talk 1.2: Making the Archive Alive
More than just storing digital data on tapes and hard drives, the MetaMedia Center has the ambition of bringing the archive of the Montreux Jazz Festival to life.
As a first project, the Montreux Jazz Heritage Lab allows the public to enjoy the concerts of the archive in a comfortable and immersive environment. A small room with innovative architecture was built at the EPFL+ECAL Lab, in partnership with several EPFL labs and startups. It is equipped with a large screen and spatial audio technology. A touch-screen table, designed for comprehensive ergonomics, lets users navigate the archive and browse and select the artists or concerts to watch.
In a second step, taking this as an opportunity to showcase and promote EPFL research in architecture, acoustics, signal processing and multimedia, today's most innovative technologies will be applied to the archive and shown to the public in a new building opening on campus in 2014: the Montreux Jazz Lab. This building will be a special kind of Montreux Jazz Café, where the latest technologies will be demonstrated to the public and applied to the Montreux Jazz recordings.
With this in view, and with the goal of promoting technology transfer, the MetaMedia Center is defining projects across the numerous labs of EPFL to bring the new ideas developed by researchers to a level that can be of high interest to investors. In collaboration with industry and emerging startup companies, prototypes of innovative products and software applications will be built and demonstrated to the public in the Montreux Jazz Lab.
I completed my Master in Communications Systems at EPFL, graduating in 2007. I then started to work for Nagravision (Kudelski Group) in the “Advanced Development” team as a software engineer. After three years, I moved to EPFL and joined the MetaMedia Center, where I have been working for almost two years now.
Talk 1.3: A Database for the Montreux Digital Archive
The archives of the Montreux Jazz Festival are obviously made of video and audio recordings, but there is more to them than that. Along with the recordings comes a massive amount of metadata that describes and complements the archives. This covers a wide range of topics, from tape format to recording quality, from song lists to musicians' roles, not forgetting authors' rights and particular events that occurred during the concerts.
The digitization of the Montreux Jazz Festival archives is a multi-partner project in which each partner provides some part of the metadata. Collecting and storing this information is crucial for exploiting the archives: browsing or searching through archives with sparse or inaccurate metadata is time-consuming and frustrating. Therefore, one of the highest priorities of the EPFL MetaMedia Center (MMC) is to ensure that accurate and useful metadata are associated with each digital recording.
When the project started, the idea of a central database hosted at EPFL quickly emerged. The different categories of information provided by the project partners are inserted into the database and linked together. The database groups all types of metadata that are closely or remotely related to the archives and thus acts as a unique reference point.
On top of the database, the MMC team developed a platform for working with the archive data. A database alone is not sufficient, so a full set of tools has been created to insert data, update data, and search through the archives. Interfaces have also been created for the project partners so that they can enter their metadata directly into the database. The database platform was developed using Scala, a language created at EPFL.
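As a rough illustration of the insert/update/search tooling described above, the sketch below models a tiny concert-metadata store. The schema, table names and sample data are invented for illustration; the MMC's actual platform was written in Scala and its schema is not public, so this Python/sqlite3 sketch is only a conceptual analogue.

```python
import sqlite3

# Hypothetical, simplified schema -- NOT the MMC's real database design.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE concerts (
    id INTEGER PRIMARY KEY,
    year INTEGER,
    artist TEXT,
    tape_format TEXT)""")
conn.execute("""CREATE TABLE songs (
    concert_id INTEGER REFERENCES concerts(id),
    position INTEGER,
    title TEXT)""")

def insert_concert(year, artist, tape_format, songs):
    """Insert one concert and its ordered song list; return the concert id."""
    cur = conn.execute(
        "INSERT INTO concerts (year, artist, tape_format) VALUES (?, ?, ?)",
        (year, artist, tape_format))
    cid = cur.lastrowid
    conn.executemany(
        "INSERT INTO songs (concert_id, position, title) VALUES (?, ?, ?)",
        [(cid, i, t) for i, t in enumerate(songs, start=1)])
    return cid

def search_by_artist(pattern):
    """Search concerts whose artist name matches an SQL LIKE pattern."""
    return conn.execute(
        "SELECT year, artist FROM concerts WHERE artist LIKE ?",
        (pattern,)).fetchall()

# Illustrative entry (song titles invented for the example):
insert_concert(1991, "Miles Davis", "Betacam SP", ["Opening", "Encore"])
print(search_by_artist("%Davis%"))  # [(1991, 'Miles Davis')]
```

The point of the sketch is the design idea stated in the abstract: one central store linking categories of metadata, with thin tools on top for insertion and search.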
Philippe Hanhart was born in Lausanne, Switzerland, on February 2nd, 1987. He received his M.Sc. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, in October 2011. From February 2011 until August 2011 he carried out his Master thesis, “Development of a fast block-based stereo matching algorithm and investigation of a novel view synthesis technique using sparse disparity maps”, at Dolby Laboratories Inc., Burbank, California, USA. Since December 2011 he has been a doctoral assistant in the Multimedia Signal Processing Group, led by Professor Touradj Ebrahimi, at EPFL. His research interests are in the fields of video processing, computer vision and computer graphics for 3D quality assessment, depth extraction, view synthesis and 3D/multiview video compression.
Talk 1.4: Defect Detection and Enhancement for the Montreux Jazz Video Archive
The digitization of old analog films is a very delicate process that may produce digital video sequences with visible defects. In the framework of the Montreux Jazz Digital Archive project, a cooperation with the Multimedia Signal Processing Group of EPFL has been set up to develop a software solution for automatic defect detection and restoration of the video sequences in the digital archive.
Typical defects include, among others, periodic and aperiodic static horizontal colour lines/bands, moving horizontal lines, thin diagonal lines, and dropouts. First, the defects have to be detected automatically by specialized algorithms that take into account the different characteristics of each defect. Then, specific restoration mechanisms can be applied to remove the detected defects.
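As a hedged illustration of the detect-then-restore pipeline, the sketch below flags rows whose mean intensity jumps relative to both neighbouring rows, a crude stand-in for the specialized detectors mentioned above. The frame, threshold and interpolation-based repair are invented for illustration and are not the project's actual algorithms.

```python
# Illustrative sketch only: a crude detector for *static horizontal line*
# defects. Real restoration uses per-defect specialized algorithms; the
# threshold, synthetic frame and repair strategy below are invented.

def detect_line_rows(frame, threshold=40.0):
    """Return indices of rows suspected to be a horizontal line defect.

    frame: list of rows, each row a list of grey-level pixel values.
    A defective row deviates strongly from BOTH of its neighbours.
    """
    means = [sum(row) / len(row) for row in frame]
    return [y for y in range(1, len(frame) - 1)
            if abs(means[y] - means[y - 1]) > threshold
            and abs(means[y] - means[y + 1]) > threshold]

def repair_rows(frame, rows):
    """Naive restoration: replace a flagged row by its neighbours' average."""
    for y in rows:
        frame[y] = [(a + b) // 2 for a, b in zip(frame[y - 1], frame[y + 1])]
    return frame

# Synthetic 8x8 grey frame with a bright defect line at row 4.
frame = [[100] * 8 for _ in range(8)]
frame[4] = [250] * 8
bad = detect_line_rows(frame)        # [4]
repair_rows(frame, bad)
print(detect_line_rows(frame))       # [] -- defect removed
```

The "without altering the original content" constraint from the abstract is exactly what this naive repair violates in general; real restoration must be far more conservative.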
The automatic defect detection and restoration is a very challenging task as the goal is to remove the artifacts without altering the original content.
Dr. Hervé Lissek was born in Strasbourg, France, in 1974. He graduated in fundamental physics from Université Paris XI, Orsay, France, in 1998, and received the Ph.D. degree from Université du Maine, Le Mans, France, in July 2002, with a speciality in acoustics. From 2003 to 2005, he was a Research Assistant at Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, with a specialization in electroacoustics and active noise control. Since 2006, he has been heading the Acoustic Group of the Laboratoire d’Electromagnétisme et d’Acoustique at EPFL, working on numerous applied fields of electroacoustics and audio engineering.
Dr. Patrick Marmaroli was born in Saint-Julien-en-Genevois, France, in 1984. He received an M.Sc. degree in signal processing and trajectography from Sud-Toulon Var University, France, in 2008. The same year, he enrolled as a PhD student at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where he developed acoustic signal processing techniques for microphone and loudspeaker array applications, as well as autonomous sound source localization and tracking. He received his PhD in December 2012. His current research interests include acoustic array processing for denoising, localization and multi-target tracking.
Talk 1.5: Acoustic User Experiences for the Montreux Jazz Lab
The Montreux Jazz Festival archives cover almost half a century of live jazz, rock and pop music, gathering international first-class musicians within a single catalogue. In the framework of the digitization and valorisation of the whole archive by EPFL research groups, the question arose of how to foster the best listening experience in dedicated listening rooms, with the help of the most up-to-date acoustic techniques developed at the Laboratory of Electromagnetics and Acoustics. This question led to the construction of two breakthrough acoustic prototypes, the SoundRelief and the SoundDots, which should lead to new concepts of multipurpose listening spaces. In parallel, the advanced acoustic signal processing techniques developed at the laboratory were applied to the automatic recognition of sound events in the whole recording stream, which will help the digitization process through features such as applause/speech/music detection, metadata identification and time-code assignment.
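The applause/speech/music detection mentioned above can be hinted at with two classic low-level audio features, short-time energy and zero-crossing rate. The sketch below is a toy heuristic with invented thresholds and synthetic signals; the laboratory's actual recognizer certainly uses richer features and trained classifiers.

```python
import math
import random

# Toy illustration of low-level features behind sound-event detection.
# Thresholds and labels are invented; this is not the LEMA algorithm.

def zcr(frame):
    """Zero-crossing rate: fraction of consecutive samples changing sign."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

def label(frame, e_min=0.01, z_split=0.25):
    """Crude frame label: silence, noise-like (applause?), or tonal (music?)."""
    if energy(frame) < e_min:
        return "silence"
    return "noise-like (applause?)" if zcr(frame) > z_split else "tonal (music?)"

sr = 8000
# A 220 Hz sine: tonal, few zero crossings per sample.
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(1024)]
random.seed(0)
# White noise: applause-like, sign flips roughly every other sample.
noise = [random.uniform(-1, 1) for _ in range(1024)]
quiet = [0.001 * x for x in noise]  # very low energy

print(label(tone))   # tonal (music?)
print(label(noise))  # noise-like (applause?)
print(label(quiet))  # silence
```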
Dr. Istvan Sebestyen has been the Secretary General of Ecma International (one of the ICT standards organizations) in Geneva, Switzerland, since April 2007. Before that, from 1985 to 2006, he worked as a standards professional at Siemens Communications in Munich, Germany, in various positions from Manager to Director of Standards. His standards-related activities mainly covered communication terminals, private networks and multimedia standards in the CCITT, ITU, ISO, ISO/IEC JTC1, ETSI, CEN/CENELEC, DIN and SNV. He has held numerous high-level positions in those standards bodies and was also part of the JPEG, MPEG and JTC1 SC29 founding teams. For JPEG, he served for 15 years as Special Rapporteur, representing the CCITT and later ITU-T side as the “JPEG parent organization”. From 1985 to 2006 he was also a University Tutor at the University of Klagenfurt, Austria, teaching office automation, new media, informatics, telematics, data networks, data communication, and telecommunication services and applications. From 1983 to 1985 he was Visiting Professor at the Institutes for Information Processing of the Technical University of Graz and the Austrian Computer Society. His special fields in research and development included new-generation videotex systems, especially intelligent videotex decoders and their applications, with lectures in applied data processing and the architecture of videotex networks, and seminars in applied information processing, informatics and telematics.
Talk 2.1: The Legend Begins – the JPEG Story from the 1980s
The presentation deals (based on the original standard documents) with the very early history of the Joint (ISO and CCITT) Photographic Experts Group (JPEG) until January 1988, when the ADCT technology was selected as the basis of the popular JPEG still picture standard – approved later in 1992 by the ITU and in 1993 by ISO/IEC JTC1. The presentation explains the different motivations behind the standard and how the standardization process progressed until the milestone decision to select the technology for the JPEG standard. Of course, the history of JPEG does not end at that point, but that is left for other presentations sometime in the future… So, stay tuned.
Alain Léger (PR associated, DR Habil, Ph.D., Ing.), aged 63, was director of the scientific programme “Knowledge Processing and Data Analysis” for the Direction of Research at FT R&D (1996–2007). He led the ISO international ADCT-JPEG ad-hoc group during the algorithm selection and competition (1985–1988). He was an executive member or scientific coordinator of several AI-related FP6-FP7 European projects (ABS, Abrose, Mkbeem, NoE Ontoweb and NoE KnowledgeWeb) (1998–2007). He is the author or co-author of about 70 papers. He received a Best Paper award at the IEEE Web Intelligence conference and the 2nd AI prize (AFIA) with his last PhD student (Lécué, 2007). He is now retired from FT R&D (Orange Labs) but is still active in Master curricula (Univ. St Etienne) for Knowledge Representation and Reasoning (KRR) and as a member of KRR-related conference committees.
He now devotes much of his time to family and friends. He is very active in human rights work with the Fourth World Movement (ATD) and regularly tutors young students in difficulty in maths and sciences. His hobbies include woodwork, drawing, painting and the plastic arts, and Greek and Italian culture, to name a few.
Talk 2.2: Historical Foundations of the JPEG Algorithm
This talk will be a recollection of the main parts of the historical foundations of today's well-known JPEG algorithm. It will focus on the most relevant aspects that made the ADCT the winner of the selection in Copenhagen (January 1988). As a collective work, it will also tentatively highlight the key players behind the ADCT-JPEG architecture.
Fumitaka Ono received his BE, ME and Ph.D. degrees from the University of Tokyo. He worked with Mitsubishi Electric Corp. for 27 years and has been a Professor at Tokyo Polytechnic University since April 2000. His areas of interest cover image coding and image processing, including information hiding. He has been an IEEE member for 36 years and an IEEE Fellow since 1995. He has been engaged in international standardization work since 1985 and is currently the ISO/IEC JTC 1/SC29/WG1 JBIG Rapporteur. He has received awards from the Ministry of Education and Science and from the Ministry of Industry and Trade, for distinguished achievements in image coding and for his contribution to standardization activities, respectively.
Talk 2.3: JPEG/JBIG History and Future Issues
JBIG (ITU-T T.82 | ISO/IEC 11544) is a bi-level image coding standard common to ITU-T and ISO/IEC JTC 1. It was standardized in 1993 under the predecessor body of SC29/WG1, the JBIG committee. JBIG stands for Joint Bi-level Image coding Group, which branched off from JPEG in order to produce a standard tuned for bi-level image documents.
The first-generation standards for bi-level image coding were MH (Modified Huffman) and MR (Modified READ), defined under ITU-T for facsimile coding in 1980. The problems with MH and MR were their inefficiency for halftone image coding and the difficulty of applying them to progressive coding. The aim of JBIG standardization was to solve these issues, and after its successful standardization, JBIG coding was also adopted as an optional facsimile coding scheme in ITU-T Recommendation T.85. One of the great features of JBIG is that it was the first international standard to adopt arithmetic coding. The coder, known as the QM-coder, was adopted as the sole entropy coding method in JBIG and as an optional entropy coding method in JPEG.
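To give a flavour of why context-based arithmetic coding beats fixed codes like MH/MR on structured bi-level content, the sketch below estimates the code length produced by an adaptive per-context probability model. It is deliberately simplified: a two-pixel context instead of JBIG's ten-pixel template, and an ideal code-length estimate (-log2 p per pixel) instead of a real QM-coder.

```python
import math
import random

# Toy illustration of JBIG's key idea: predict each pixel from a context of
# already-coded neighbours and let an arithmetic coder spend -log2(p) bits.
# Two-pixel context and ideal code length; NOT an actual QM-coder.

def estimate_bits(image):
    counts = {}  # context -> (count of 0s, count of 1s), Laplace-smoothed
    bits = 0.0
    for y in range(len(image)):
        for x in range(len(image[0])):
            left = image[y][x - 1] if x > 0 else 0
            above = image[y - 1][x] if y > 0 else 0
            ctx = (left, above)
            c0, c1 = counts.get(ctx, (1, 1))
            p = (c1 if image[y][x] else c0) / (c0 + c1)
            bits += -math.log2(p)           # ideal arithmetic-coding cost
            counts[ctx] = (c0 + (image[y][x] == 0), c1 + (image[y][x] == 1))
    return bits

N = 64
# Structured bi-level image: vertical bars, highly predictable from context.
bars = [[1 if (x // 8) % 2 else 0 for x in range(N)] for y in range(N)]
random.seed(0)
# Incompressible reference: random bits.
noise = [[random.getrandbits(1) for _ in range(N)] for _ in range(N)]

print(estimate_bits(bars) < estimate_bits(noise))  # True
```

The structured image costs a tiny fraction of its 4096 raw bits, while the random image stays near 1 bit per pixel, which is the gap that made context modelling attractive for documents and halftones.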
JBIG2 (ITU-T T.88 | ISO/IEC 14492) was defined in 2001 as the successor of JBIG. JBIG2 combines a JBIG-like bitmap coding mode and a pattern-matching coding mode in a quite elegant way. Using pattern-matching techniques, JBIG2 can compress text-type documents to roughly one fifth of the size achieved by JBIG1, albeit with lossy compression. JBIG2 also compresses periodic halftone images more efficiently than JBIG, again thanks to pattern matching. JBIG2 has also been adopted as an optional facsimile coding scheme in ITU-T (T.89), and is used in the PDF format and Google Books because of its remarkable benefit in high-volume usage.
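The pattern-matching idea can be caricatured in a few lines: store each distinct glyph bitmap once in a symbol dictionary and code the page as references to it. The glyphs, page and bit-cost accounting below are invented for illustration and ignore everything a real JBIG2 coder does (positions, refinement, entropy coding of the references).

```python
# Toy sketch of JBIG2's pattern-matching mode: repeated glyph bitmaps are
# stored once in a symbol dictionary; the page becomes a list of symbol ids.
# Glyphs and cost model are invented for illustration.

def symbol_code(glyphs):
    """glyphs: list of small bitmaps (tuples of tuples of 0/1) in page order.
    Returns (dictionary of unique bitmaps, list of references into it)."""
    dictionary, references, index = [], [], {}
    for g in glyphs:
        if g not in index:
            index[g] = len(dictionary)
            dictionary.append(g)
        references.append(index[g])
    return dictionary, references

A = ((0, 1, 0), (1, 1, 1), (1, 0, 1))  # hypothetical 3x3 glyph "A"
B = ((1, 1, 0), (1, 1, 0), (1, 1, 1))  # hypothetical 3x3 glyph "B"
page = [A, B, A, A, B, A, B, A]        # a "text line" with much repetition

dictionary, refs = symbol_code(page)
raw_bits = len(page) * 9                           # 8 glyphs x 9 pixels
coded_bits = len(dictionary) * 9 + len(refs) * 1   # bitmaps once + 1-bit ids
print(len(dictionary), coded_bits < raw_bits)      # 2 True
```

Real text pages repeat the same few dozen character shapes thousands of times, which is why this substitution wins so dramatically on text-type documents.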
The talk will introduce the history of and technologies adopted in JBIG/JBIG2, as well as the future study items of the JPEG/JBIG team.
Richard graduated from Churchill College, Cambridge, in Electrical Sciences in 1971. He went on to work at British Telecom, where he was charged with developing the standards for new non-voice services on System X – BT's forerunner to ISDN. He was part of the department that worked closely with other groups to form the fledgling Internet, as described in Vint Cerf's interview ‘How the Internet came to be’.
As well as working on early network standards, Richard was active in the fledgling development of facsimile, email and videotex standards; he was BT's representative to the CEPT Subscriber Equipment committee and, through BSI, to ISO as a character coding expert. He helped standardise BT's videotex service Prestel, following a secondment to study for a Masters at Imperial College in 1973, developing the standard in competition with other international offerings. He was very active in early proposals to enhance videotex with photographic capability, and the emerging photovideotex standard was a key driver in his proposals to split the work of ISO's character coding technical committee, TC97/SC2, which was concentrating on the development of Unicode, into a separate Working Group, WG8, which eventually became today's ISO JTC1 SC29, responsible for all MPEG and JPEG standards activity.
He left BT in 1979 to become Principal Engineer at one of the world's first multi-disciplinary consultancies, Communications Studies and Planning, which carried out many key studies for major organisations around the world. With Michael de Smith, Principal Mathematician at CS&P, he formed a new company, eventually to become the Applied Telematics Group, and within its subsidiary, Viewtext, developed the first commercial photovideotex system, Computex, launched in 1986.
He left ATG in 1991 to form Elysium Ltd, in order to focus on the emergence of hypertext systems, which he had worked on at Viewtext and BT and which evolved into the World Wide Web, and to help with their standardisation. He still manages Elysium Ltd, which has been active in JPEG standardisation since its inception, as well as contributing to standards on accessibility and to the development of distribution systems for standards information, notably in BSI and with Perinorm. Elysium has operated the JPEG web site on a pro bono basis since 1996, and also, on a commercial basis, the MPEG web site for some years and that of the International Color Consortium.
Richard is JPEG’s honorary webmaster and has chaired the Historical Archive Group since he helped form it to counter the threat to the JPEG standard from numerous patent claims, in some of which he has appeared as an expert witness on process. He has edited a number of key standards, including parts of JPEG, JPEG-LS and JPEG 2000. Elysium has also contributed to the development of a number of key open-source projects, including Linux, Apache and PHP. Richard also chaired the UK counterpart to SC29 for over ten years, during the time that its key standards were published.
Richard was a founding board member of the journal Computer Communications in 1978, chaired the organizing committee for the first international ISDN conference in London in 1979, and gave the keynote presentation to the Society for Information Display in New York in 1981. He was awarded a Distinguished Service Certificate by BSI in 2001, and a special Management Services award from SC29 in 2005 citing his “outstanding performance, dedication, patience, creativity and organization” in his work on the JPEG website and document repository.
Talk 2.4: Concluding Notes on JPEG History
PhD in mathematical physics at TU Berlin in 2000; project manager for image compression at Algovision GmbH, Berlin, 2000–2002; position in the multimedia department of TU Berlin, 2002–2007; contractor for Pegasus Imaging (now Accusoft) since 2003; joined JPEG in 2005; division manager for virtual and remote experiments at the Computing Center of the University of Stuttgart since 2007.
Talk 3.1: JPEG Standard for Coding High Dynamic Range (HDR)
In this talk, the goals of the new emerging JPEG standard for coding High Dynamic Range still images will be defined, and an architectural overview of the standard will be given. The topics cover not only the coding aspects but also the history of the design and the methods used to evaluate the proposals of the participating bodies.
Talk 3.2: Considerations on Dynamic Range Imaging
Mohamed-Chaker Larabi received his PhD from the University of Poitiers (2002). He is currently the associate professor in charge of the perception, colour and quality activity at the same university. His current scientific interests include image and video coding and optimization, 2D and 3D image and video quality assessment, and user experience. He works on Human Visual System modeling (spatial, temporal, spatio-temporal and binocular) for the enhancement of algorithms such as compression and digital cinema. Chaker Larabi has been a member of the French National Body for the ISO JPEG/MPEG committees since 2000 and chairs the Advanced Image Coding group and the Test and Quality SubGroup. He serves as a member of divisions 1 and 8 of the CIE, is a member of IS&T, and is a senior member of the IEEE. He is involved in many local, regional, national and international projects.
Talk 3.3: Report on JPEG AIC Activities
Talk 4.5: Role of Structural Migrations in Quality Preservation for Archives
Quality preservation for archives is a very important topic and deserves to be addressed carefully. Images have a complex structure that varies according to the processing applied (compression, enhancement, …). The structural information they contain is a very important indicator of perceived quality. Based on these observations, this talk will define structural migrations in an image, discuss their modeling, and study their application to quality estimation.
Siegfried Foessel received his diploma degree in electronic engineering in 1989. He started his professional career as a scientist at the Fraunhofer Institute for Integrated Circuits IIS in Erlangen and was project manager for projects in process automation, image processing systems and digital camera design. In 2000 he received his Ph.D. degree on image distribution technologies for multiprocessor systems. Since 2001 he has focused on projects for digital cinema and media technologies, with responsibility for projects such as the ARRI D21, the DCI certification test plan and JPEG2000 standardisation for Digital Cinema. Siegfried is a member of various standardisation bodies and organisations such as SMPTE and ISO; in ISO SC29/JPEG he chairs the systems group, and within the EDCF he is a member of the technical board. Since 2010 Siegfried has been head of the Moving Picture Technologies department, spokesman of the Fraunhofer alliance Digital Cinema and vice president of the FKTG, the German equivalent of SMPTE.
Heiko Sparenberg, born in 1977, received his Master degree in Computer Science from the University of Hagen, Germany, in 2006. He started his professional career as an engineer at the Fraunhofer Institute IIS in Erlangen, where he is head of the Digital Cinema group. His research topics are scalable media-file management, post-production software in the field of Digital Cinema, and image-compression algorithms (JPEG2000).
Talk 4.1: Use of JPEG2000 for Archiving
JPEG2000 is used in many archive applications from different manufacturers. The presentation will describe the outcome of the EU project EDCINE, in which new archive profiles for storage applications were defined.
As a consequence, a software system named “Curator Archive Suite”, based on these profiles, was developed by Fraunhofer for high-quality archiving of film content. The system architecture of this software will be described in detail. Furthermore, this software will be used in a European pilot project called EFG1914, in which more than 20 European archives will deliver content from the First World War to the Europeana gateway.
Finally, an outlook will be given on the standardization activities within SMPTE for IMF (Interoperable Master Format), in which a master archiving format (extended level #2) based on JPEG2000 is under discussion.
Werner Bailer received a degree in Media Technology and Design in 2002 for his diploma thesis on motion estimation and segmentation for film/video standards conversion, and is working on a PhD thesis on multimedia content abstraction. He is currently a Key Researcher at the Audiovisual Media Group of DIGITAL – the Institute for Information and Communication Technologies at JOANNEUM RESEARCH in Graz, Austria. His research interests include audiovisual content analysis and retrieval, preservation of audiovisual media, and multimedia metadata. He has many years of experience in European and national research projects, is the author of more than 50 peer-reviewed publications, and has contributed to standardisation in MPEG and the W3C.
Talk 4.2: MPEG Multimedia Preservation Activities
Understanding the importance of preserving digital multimedia used in many different domains, including cultural heritage, scientific research, engineering, education and training, entertainment, and fine arts, MPEG has started to work on the standardization of a Multimedia Preservation Archival Format. At its latest meeting, MPEG reviewed the responses to the call for proposals. The responses showed that MPEG has a wide range of technologies that can be used for multimedia preservation, such as the Professional Archival Application Format, the MPEG-21 Digital Item Description Language and various MPEG-7 audio-visual descriptors. MPEG will continue to evaluate the submissions and develop the standard in its coming meetings, with a working draft planned for April 2013 that should reach Final Draft International Standard status in April 2014.
Jean-Pierre Evain joined the EBU's Technical Department in 1992 to work on “New Systems and Services”, after several years spent in the R&D laboratories of France Telecom (CCETT) and Deutsche Telekom. He now looks after “Media Fundamentals and Production Technologies” and coordinates all EBU technical activities concerning metadata and new production architectures. He is the co-author of several EBU metadata specifications and actively promotes the use of semantic web technologies in broadcasting. He is the project manager of the joint AMWA-EBU FIMS project on Service-Oriented Architecture. He represents the EBU in many standards groups and industry forums, including AES, ETSI, IPTC, MPEG, SMPTE, UK-DPP and W3C, among several others.
Talk 4.3: EBU Preservation and Related Activities
The presentation will address past and recent EBU activities related to preservation. EBU recommendations on how best to manage archives from a content, metadata, rights and methodology (what, when, how) perspective will be summarised. A new architectural approach (FIMS) for better workflow integration, including archiving and preservation processes, will be introduced. The EBU is also doing a lot of work on new content and file formats (HD, UHDTV, AXF), metadata and quality control, all of which are important constituents of the essence and information to be preserved. Much of this work is done in collaboration with external bodies such as AES, IPTC, ITU, MPEG, SMPTE and W3C, among others.
Peter Schelkens holds a professorship at the Department of Electronics and Informatics (ETRO) at the Vrije Universiteit Brussel (VUB). He is research director at the iMinds institute (www.iMinds.be), founded by the Flemish government to stimulate ICT innovation. Additionally, since 1995 he has been affiliated with the Interuniversity Microelectronics Institute (www.IMEC.be), Belgium, as a scientific collaborator, and since 2010 he has been a member of its board of councilors. From 2002 until 2011, he held a postdoctoral fellowship with the Research Foundation – Flanders (FWO). His research interests are situated in the field of multidimensional signal processing, encompassing the representation, communication, security and rendering of such signals, with a particular focus on cross-disciplinary research. He has published over 200 papers in journals and conference proceedings, holds several patents and has contributed to several standardisation processes. His team participates in the ISO/IEC JTC1/SC29/WG1 (JPEG) and WG11 (MPEG) standardization activities. Peter Schelkens is the Belgian head of delegation for the ISO/IEC JPEG standardization committee, editor/chair of Part 10 of JPEG 2000, “Extensions for Three-Dimensional Data”, and PR Chair of the JPEG committee. Since 2012 he has acted as rapporteur/chair of JPEG Coding and Analysis Technologies, overseeing the image processing technologies embedded in all JPEG standards. He is co-editor of the books “The JPEG 2000 Suite” and “Optical and Digital Image Processing”, published in 2009 and 2011 respectively by Wiley. He is a member of IEEE, SPIE and ACM, the Belgian EURASIP Liaison Officer and a committee member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee (IVMSP TC).
In 2011, he served as General (Co-)Chair of the following conferences: the IEEE International Conference on Image Processing (ICIP, www.icip2011.org), the Workshop on Quality of Multimedia Experience (QoMEX, www.qomex2011.org) and the Workshop on Image Processing for Art Investigation (IP4AI, www.ip4ai.org). Peter Schelkens is also co-founder of the spin-off company Universum Digitalis (www.universumdigitalis.com). His team is also a member of the Intel Exascience Lab (www.exascience.com) in Belgium.
Talk 4.4: Closer to Van Eyck: Rediscovering the Ghent Altarpiece
The website “Closer to Van Eyck – Rediscovering the Ghent Altarpiece” presents the Ghent Altarpiece (1432) – Van Eyck's famous polyptych – in visible-light macrophotography, infrared macrophotography, infrared reflectography and X-radiography. Additionally, multiple extreme close-ups of selected details are available in the first two modalities. In total, the website contains more than 100 billion pixels of image data, so processing and presenting such a huge amount of data posed significant challenges. The website is the result of the preparation of a restoration campaign for the Ghent Altarpiece, which included the assessment of the altarpiece's current structural condition.
The polyptych panels were photographed in a regular overlapping grid, each image block a 200 MB photograph of 4992×6668 pixels, capturing a 22.4×16.7 cm area of the painting’s surface.
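The quoted figures are self-consistent if one assumes 16-bit RGB capture (an assumption; the talk does not state the bit depth). A quick back-of-the-envelope check:

```python
# Sanity-check of the figures quoted above, ASSUMING 16-bit RGB capture
# (the bit depth is not stated in the talk abstract).
w_px, h_px = 4992, 6668
w_cm, h_cm = 16.7, 22.4          # block orientation matched to the pixel grid
bytes_per_pixel = 3 * 2          # 3 channels, 2 bytes each (assumption)

px_per_cm = w_px / w_cm
size_mb = w_px * h_px * bytes_per_pixel / 1e6
print(f"{px_per_cm:.0f} px/cm ({px_per_cm * 2.54:.0f} dpi), "
      f"{size_mb:.0f} MB per block")
# -> 299 px/cm (759 dpi), 200 MB per block
```

Under that assumption the arithmetic lands exactly on the stated 200 MB per block, i.e. roughly 33 µm of painted surface per pixel.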
These image blocks were semi-automatically stitched to reconstruct the whole panels; to obtain a smooth transition from one block to the next in the stitched panel, several image artefacts (e.g. focus, lighting) were taken into account. This process was followed by a registration procedure to automatically align the different imaging modalities. The algorithmic approach avoids the tedious, if not impossible, task of manually stitching and registering thousands of image blocks. Furthermore, the developed workflow can be readily applied to any new material obtained during the upcoming restoration of the painting, and it enables a direct “before” and “after” restoration comparison.
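To give a flavour of what such alignment involves: the translation between two overlapping image blocks can be estimated by phase correlation. This is a minimal sketch of that general technique, not the actual workflow used for the altarpiece:

```python
import numpy as np

def estimate_shift(block_a, block_b):
    """Estimate the integer (row, col) translation between two
    overlapping image blocks via phase correlation. Illustrative
    sketch only -- not the Ghent Altarpiece production pipeline."""
    fa = np.fft.fft2(block_a)
    fb = np.fft.fft2(block_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the wrap-around peak position to a signed shift
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# usage: shift a synthetic texture and recover the offset
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(estimate_shift(shifted, img))   # -> (5, -3)
```

Real multimodal registration (aligning X-radiographs to photographs, say) is much harder, since the modalities differ in content, not just position; phase correlation only illustrates the translational core of the problem.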
The extremely high-resolution images were made available to the public. A web application was built that allows visitors to navigate through the images, zoom in on details, compare modalities and share any detail with friends or colleagues. The JPEG 2000 image file format was used to significantly reduce the required storage space and to enable efficient access to the image data.
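One reason JPEG 2000 enables efficient access is its dyadic wavelet decomposition: each resolution level halves the image dimensions (with ceiling rounding), so a viewer can request only the resolution it actually displays. A small illustrative sketch of the level dimensions (not the website’s actual code):

```python
def jp2_level_dims(width, height, n_levels):
    """Image dimensions available at each JPEG 2000 resolution level.
    Level 0 is full resolution; each DWT decomposition halves the
    dimensions with ceiling rounding (per ISO/IEC 15444-1)."""
    return [(-(-width // (1 << r)), -(-height // (1 << r)))
            for r in range(n_levels + 1)]

# e.g. a stitched panel on the order of 10^5 pixels per side:
for w, h in jp2_level_dims(100_000, 150_000, 5):
    print(f"{w} x {h}")
```

With five decomposition levels, a 100,000×150,000-pixel panel can be served as a 3,125×4,688 thumbnail without ever decoding the full-resolution data.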
The authors would like to thank Prof. Dr. Ron Spronk, the Getty Foundation and NWO for the support obtained through the Lasting Support and Web application Ghent Altarpiece projects.
Director, Open Source and Standards, XTO
Adobe Systems Incorporated
Dave is focused on the company’s long-term strategic direction as it relates to leveraging standards and open-source technologies to position and advance Adobe technologies and interests. He is Adobe’s representative to Ecma International, the W3C eGov initiative, the Linux Foundation and other industry associations. His most recent interest is in gamification, applying game mechanics in non-game situations. He led Adobe’s effort to formalize PDF as an ISO standard, resulting in unanimous approval of ISO 32000.
He often speaks on topics such as the real-world issues associated with open-source software and on creating new technology companies. Well versed in trivia, he won a Golden Penguin in 2002. He has held seats on Advisory Boards for Sistina, Woven Systems, Pathworks, Zetera and ConcreteCMS. He is currently on the Reader Advisory Board for Linux Journal.
Talk 4.6: Construction and Standardization of PDF/A
Since its introduction in 1993, PDF has been one of the most successful document presentation formats around. However, documents also need to be archivable: rendered consistently over time, so that the original intent of the author is preserved.
So what lessons have we learned from the creation of PDF/A, from the original design based on Adobe PDF 1.4 to its standardization as ISO 19005-1 in 2005? What makes up the digital envelope of PDF/A, and how has it adapted over the years to reflect changing needs and understanding of archives? What does the current version offer, and is there an opportunity to extend this highly successful format to support additional media types and needs?
Talk 4.7: Privacy Protection for Digital Archives
Vincenzo Croce is an engineer in computer science. Since February 2001, he has worked as a project manager in Engineering’s R&D laboratory, where he has held technical responsibilities in many European research projects. He was project manager, with technical responsibilities, of the PHAROS project (Platform for searcH of Audiovisual Resources across Online Spaces), an FP6 IST Integrated Project. He is currently managing the I-SEARCH project (A unified framework for multimodal content SEARCH), an FP7 IST project, and is coordinating the CUbRIK project. He has also authored scientific publications presented at international conferences.
Talk 4.8: Rich Unified Content Description (RUCoD)
Since the advent of Web search engines in 1990, the search process has been associated with textual input. From the first search engine, Archie, to state-of-the-art engines like WolframAlpha, this fundamental input paradigm has not changed. Apart from text, human voice has recently been supported as an input modality (Apple’s Siri for iOS, Google’s Voice Actions for Android, Voice Search for desktop computers). However, what is still missing is a fully multimodal search engine. When searching for slow, sad, minor-scale piano music, the best input modality would be to simply upload a similar audio sample. When searching for Times Square, New York, the best input might be the coordinates (geo-location) of Times Square together with a photo of a yellow cab (image). Within the I-SEARCH project, we move beyond traditional text-based search to a more explorative, multimodality-driven search experience in order to support different search needs.
To support multimodal search and retrieval in the context of I-SEARCH, we have introduced the concept of Content Objects: rich media representations enclosing different types of media, along with real-world and user-related information. To enable search and retrieval of Content Objects, a suitable description scheme has been defined within I-SEARCH. This so-called Rich Unified Content Description (RUCoD) provides a uniform descriptor for all types of Content Objects, irrespective of the underlying media and accompanying information. In the following, we will provide an overview of I-SEARCH and outline its general objectives and significant achievements. We will then elaborate on the proposed RUCoD description format and describe the details of our system for managing the overall media processes, leveraging the RUCoD representation. Finally, we will give an outlook on future work.
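As a rough illustration of the Content Object idea, here is a toy Python sketch. The real RUCoD is a formal XML-based schema; all field names below are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    """One media component of a Content Object (field names invented)."""
    modality: str                 # e.g. "image", "audio", "text", "3d"
    uri: str                      # where the raw media lives
    descriptors: dict = field(default_factory=dict)  # low-level features

@dataclass
class ContentObject:
    """Toy stand-in for a RUCoD-described Content Object: several media
    items plus real-world and user-related information under one name."""
    name: str
    media: list
    real_world: dict = field(default_factory=dict)   # e.g. geo-location
    user_info: dict = field(default_factory=dict)    # e.g. tags, ratings

# usage: the "Times Square" example from the abstract as one object
co = ContentObject(
    name="Times Square",
    media=[MediaItem("image", "yellow_cab.jpg"),
           MediaItem("text", "times_square.txt")],
    real_world={"geo": (40.758, -73.985)})
print(len(co.media))   # -> 2
```

The point of such a wrapper is that a query against any one modality (an audio sample, a photo, coordinates) can be matched against the whole object, not just a single file.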
After studying physics and astronomy, PD Dr. Lukas Rosenthaler obtained his Ph.D. in 1989 at the Institute of Physics, in the field of nanophysics, in the group of Prof. J. Güntherodt. He collaborated in the construction of one of the first Scanning Tunneling Microscopes (STM) at the University of Basel, where he was responsible for data analysis and visualization.
From 1988 to 1992 he worked as a postdoc at the Image Science Lab of the Swiss Federal Institute of Technology in Zürich (ETHZ), in the group of Prof. O. Kübler. Together with psychologists and neurophysiologists he developed a mathematical-algorithmic model of human visual perception. This model successfully explains some perceptual phenomena, such as certain optical illusions. During this time he lectured on image processing and analysis at ETHZ.
In 1992 he joined Cadwork AG, where he was responsible for software development in the fields of 3D visualization, animation, user interface design and databases. At the same time, he was a freelance collaborator with the Scientific Photography Lab of the Department of Chemistry of the University of Basel; his research topic there was the digital restoration of motion pictures. Since 2001 he has been a full-time staff member, and since 2012 the head, of the Imaging & Media Lab of the Faculty of Humanities at the University of Basel. His research focuses on the preservation of the audio-visual heritage, including digitization, long-term archiving, the restoration of photographic collections and moving image archives, and new access tools. Projects he leads or is involved in include “DISTARNET”, a digital long-term archive using distributed systems based on a P2P architecture; “PEVIAR”, the Permanent Visual Archive, which uses microfilm as digital storage; and “ReteFontium”, a new collaborative tool for the Humanities for working with digital sources. He also plays a vintage Hammond organ and Moog synthesizer in a hard-rock and blues band celebrating the sound of the seventies, and can often be seen and heard on stage in the region of Basel.
Talk 4.9: Preservation Projects at Basel University
For more than 15 years, the long-term preservation of digital data has been the main focus of research at the Imaging and Media Lab (IML) of the University of Basel. Now, in 2013, it can safely be stated that the long-term archival of digital data is technically feasible and that sufficient knowledge has been acquired to guarantee the longevity of data. The OAIS reference model offers a firm theoretical basis for the archiving of digital data, and there are technical solutions to the specific problems, such as open, well-documented file format standards (e.g. PDF/A, TIFF, J2K for images). Basically, two approaches are possible:
- the migration approach, for which we developed a model of a distributed, self-migrating archive (DISTARNET); and
- the use of an open, technology-independent long-term storage medium for digital data on microfilm (Bits-on-Film, Monolith), which is commercially available.
While these methods work well for static digital data such as image files, sound files, etc., two problems remain open and are currently the focus of our work:
- While the technological side seems to be solved, it is still unclear which institutions should take care of digital assets. Traditional archives and libraries are often not well prepared for this task, lacking the knowledge and resources to deal with the large amounts of digital data to be preserved.
- While digital data (defined as static digital files) can be preserved, many questions remain open about dynamic resources such as online databases. This is especially true for highly structured research data that should be made available to the research community indefinitely. The focus of research at the IML is therefore shifting to these topics; in particular, the combination of virtual research environments (VRE) with long-term archiving seems a very promising path.