
Invited Talks

Invited MACIS Speakers

Hoon Hong
Hoon Hong obtained his PhD in 1990 under the supervision of G. E. Collins. He is currently a Professor at the Department of Mathematics of North Carolina State University and serves as Editor-in-Chief of the Journal of Symbolic Computation. His research interests include quantifier elimination, cylindrical algebraic decomposition, algebraic geometry, and automated deduction in geometry.

Overview on Real Quantifier Elimination

The real quantifier elimination problem arises in numerous areas of logic, mathematics, science, and engineering, among them geometric theorem proving, topological analysis of curves and surfaces, verification, analysis, and synthesis of algorithms, robot motion planning, design of robust control systems, analysis of bio-chemical reactions, and statistical model evaluation. Due to its importance, extensive research has been done and is still being actively carried out. This talk provides a brief overview of the general ideas and of various special methods for particular application domains.
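
As a textbook illustration (not taken from the talk), quantifier elimination replaces a quantified statement by an equivalent quantifier-free condition on the remaining variables; for instance, a monic quadratic has a real root exactly when its discriminant is non-negative:

    \exists x \in \mathbb{R} :\; x^2 + bx + c = 0 \quad\Longleftrightarrow\quad b^2 - 4c \geq 0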

David Monniaux
David Monniaux is a senior researcher at CNRS, the largest French national scientific research agency, and works at the computer science laboratory VERIMAG, which is operated by CNRS and the University of Grenoble. He is also a part-time professor at École polytechnique near Paris, where he teaches logic, computability, complexity, and software verification. His research focuses on proving software correctness; in particular, he has worked on proving strong properties of critical software, cryptographic protocols, and probabilistic systems.

Proving and Inferring Invariants

Program analysis consists in automatically proving properties of computer programs. This is in general an undecidable problem; yet there exist methods that work for many programs, enough to be of industrial significance. The analysis of programs with loops or recursive functions relies on the notion of invariants: while certain methods require these invariants to be supplied by the user, others attempt to infer them. Certain invariant inference problems can be recast as optimization or numerical constraint problems; we will describe a few methods for solving these.
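
As a minimal illustration of what an invariant is (a sketch assuming the z3-solver Python package; the loop and the candidate invariant are invented for this example), an SMT solver can check that a candidate invariant holds on loop entry and is preserved by every iteration:

    # Check that 0 <= i <= n is an inductive invariant of the loop
    #   i = 0; while i < n: i = i + 1     (assuming n >= 0 on entry)
    from z3 import And, Implies, Int, Not, Solver, unsat

    i, i_next, n = Int("i"), Int("i_next"), Int("n")
    inv = lambda v: And(0 <= v, v <= n)   # candidate invariant I(v, n)

    def valid(formula):
        """A formula is valid iff its negation is unsatisfiable."""
        s = Solver()
        s.add(Not(formula))
        return s.check() == unsat

    # Initiation: the invariant holds when the loop is first reached.
    print(valid(Implies(And(i == 0, n >= 0), inv(i))))                       # True
    # Consecution: one loop iteration preserves the invariant.
    print(valid(Implies(And(inv(i), i < n, i_next == i + 1), inv(i_next))))  # True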

Zongben Xu
Dr. Zongben Xu is an Academician of the Chinese Academy of Sciences and a Professor of Mathematics and Computer Science. He now serves as Vice President of Xi'an Jiaotong University and as Chief Scientist of the National Basic Research Program of China. His current research interests include nonlinear functional analysis, the mathematical foundations of information technology, and computational intelligence.

Learning through Deterministic Assignment of Hidden Parameters

Supervised learning boils down to determining the hidden and bright parameters of a parameterized hypothesis space from a finite number of input-output pairs. The hidden parameters are those that determine the hidden predictors, that is, the nonlinear mechanism of an estimator, while the bright parameters characterize how the hidden predictors are linearly combined, that is, the linear mechanism. In traditional learning paradigms, the hidden and bright parameters are not distinguished and are trained simultaneously in a single learning process (in one stage). Such one-stage learning (OSL) facilitates theoretical analysis but suffers from a very high computational burden. To overcome this difficulty, a two-stage learning scheme (TSL), which may be described as learning through random assignment of hidden parameters (LRHP), was developed in recent years: it randomly assigns the hidden parameters in the first stage and determines the bright parameters by solving a linear least-squares problem in the second stage. LRHP works well in many applications but suffers from an uncertainty problem: its performance can be guaranteed only in a statistical-expectation sense.
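
The following is a minimal sketch of the LRHP scheme just described (in Python with NumPy; the toy data, the network width, and the sigmoid activation are assumptions made purely for illustration): hidden weights and biases are drawn at random, and only the output weights are fitted by least squares.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data: y = sin(x) + noise.
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

    # Stage 1: assign the hidden parameters (weights, biases) at random.
    n_hidden = 50
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden-layer outputs

    # Stage 2: fit the bright parameters by linear least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)

    print("training RMSE:", np.sqrt(np.mean((y - H @ beta) ** 2)))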

In this talk, we report a new TSL scheme: learning through deterministic assignment of hidden parameters (LDHP). Motivated by the study of the classical hard sphere problem in mathematics, we propose to deterministically take the hidden parameters as the tensor product of the minimal Riesz energy points on the sphere and the best packing points in an interval. We show theoretically that, with such a deterministic assignment of hidden parameters, LDHP shares the generalization performance of OSL, i.e., it does not degrade OSL's generalization capability. Thus LDHP provides an effective way of overcoming both the very high computational burden of OSL and the uncertainty problem of LRHP. We present a series of simulation and application examples that support this advantage and show that LDHP outperforms a typical one-stage algorithm, Support Vector Regression (SVR), and a typical LRHP algorithm, the Extreme Learning Machine (ELM). The study reported here paves a new road toward solving supervised learning problems simply and more efficiently.
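
A rough sketch of the construction (not the authors' code; the Riesz exponent, the point counts, and the naive projected gradient descent used to approximate the minimal-energy configuration are all assumptions): best packing points in an interval are simply equally spaced, and the sphere points are obtained by numerically minimizing the Riesz energy from a fixed seed, so the whole assignment is reproducible rather than random.

    import numpy as np

    def riesz_points(m, s=1.0, steps=3000, lr=0.01, seed=1):
        """Approximate minimal Riesz s-energy points on the unit sphere S^2."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((m, 3))
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        for _ in range(steps):
            diff = X[:, None, :] - X[None, :, :]       # pairwise differences
            dist = np.linalg.norm(diff, axis=2)
            np.fill_diagonal(dist, np.inf)             # ignore self-interaction
            # gradient of sum_{i != j} dist_ij^(-s) with respect to each point
            grad = -s * (diff * dist[:, :, None] ** (-(s + 2))).sum(axis=1)
            X -= lr * grad
            X /= np.linalg.norm(X, axis=1, keepdims=True)  # project onto sphere
        return X

    directions = riesz_points(20)             # 20 well-spread directions
    thresholds = np.linspace(-2.0, 2.0, 5)    # best packing of 5 points in [-2, 2]
    # Tensor product: 20 x 5 = 100 deterministic hidden units (w, t), used
    # exactly where LRHP would draw hidden parameters at random.
    hidden = [(w, t) for w in directions for t in thresholds]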

Invited Track Speakers

Constraints and Combinations: Jasmin Christian Blanchette
Jasmin Christian Blanchette joined the Chair for Logic and Verification at the Technische Universität München, Germany, in 2008. He obtained his PhD in 2012 under the supervision of Prof. Tobias Nipkow. From 2000 to 2008, he worked as a software engineer and documentation manager for Trolltech in Oslo, Norway. His research focuses on the generation of counterexamples for higher-order logic (Nitpick) and the use of first-order automated theorem provers within a higher-order theorem prover (Sledgehammer).

Isabelle/HOL – Interaction and Automation

Isabelle/HOL is a popular proof assistant based on higher-order logic. It owes its success to its ease of use and powerful automation. Much of the automation is performed by external tools: The metaprover Sledgehammer relies on automatic theorem provers for its proof search, and the counterexample generator Nitpick relies on SAT solvers. Together with the Isar structured proof format and a new asynchronous user interface, these tools have radically transformed the Isabelle user experience. Recently, Isabelle has been used to verify the seL4 microkernel, to formalize the Java and C++ memory models, and even to formalize a proof of God's existence.

Data Modeling and Analysis: Sang-Wook Kim
Sang-Wook Kim earned his PhD in 1994 from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea. From 1995 to 2003, he served as an Associate Professor in the Division of Computer, Information, and Communications Engineering at Kangwon National University, Chuncheon, Kangwon, Korea. Currently, he is a full Professor in the Division of Computer Science and Engineering at Hanyang University, Korea. Professor Kim is the author of more than 100 papers in refereed international journals and conference proceedings. His research interests lie in DBMSs, data mining, web data analysis, and social network analysis. He is a member of the ACM and the IEEE.

Link-Based Similarity Measures in Scientific Literature Data: Methods, Performance, and Applications

As the number of people who use scientific literature databases such as Google Scholar, CiteSeer, and Microsoft Academic Search grows, the demand for literature retrieval services has been steadily increasing. One of the most popular retrieval services is to find a set of papers similar to a paper under consideration, which requires a measure of the similarity between papers. Scientific literature databases exhibit two interesting characteristics that distinguish them from general databases. First, the papers cited by old papers are often not included in the database, for technical and economic reasons. Second, few papers cite recently published papers. These two characteristics cause existing similarity measures to fail in at least one of the following cases: (1) measuring the similarity between old, but similar papers, (2) measuring the similarity between recent, but similar papers, and (3) measuring the similarity between two similar papers, one old and the other recent. In this talk, we address link-based similarity measures that take these characteristics of scientific literature databases into account. The talk consists of the following parts: (1) previous methods and our methods for computing link-based similarity, (2) performance comparisons of link-based similarity measures, and (3) some other interesting applications of link-based similarity measures.
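
The talk does not single out one measure here, but SimRank is a classical link-based similarity measure and illustrates the general idea; the following is a minimal sketch in Python over an invented toy citation graph (two papers are similar to the extent that the papers citing them are similar).

    import numpy as np

    def simrank(in_neighbors, n, C=0.8, iters=10):
        """in_neighbors[k] lists the papers citing paper k; C is a decay factor."""
        S = np.eye(n)
        for _ in range(iters):
            S_new = np.eye(n)
            for a in range(n):
                for b in range(n):
                    if a == b or not in_neighbors[a] or not in_neighbors[b]:
                        continue
                    total = sum(S[u, v] for u in in_neighbors[a]
                                        for v in in_neighbors[b])
                    S_new[a, b] = C * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
            S = S_new
        return S

    # Toy citation graph: papers 2 and 3 both cite papers 0 and 1.
    in_nb = {0: [2, 3], 1: [2, 3], 2: [], 3: []}
    print(simrank(in_nb, 4)[0, 1])   # 0.4: papers 0 and 1 share their citers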

Information Security and Cryptography: Christoph Sorge
Christoph Sorge obtained his PhD from the University of Karlsruhe, Germany, in 2007. His doctoral thesis dealt with the development of distributed recommendation systems and the corresponding assurance of data protection. From 2008 to 2010, he was a research scientist at NEC Laboratories Europe in Heidelberg, Germany. Since 2010, he has been a Junior Professor at the University of Paderborn, Germany. His research interests include applications of cryptography in network security, privacy-enhancing technologies, and legal aspects of security and privacy.

The Interplay between Privacy, Cryptography and Law

The talk illustrates the relation between privacy, cryptography, and law, and shows that an interdisciplinary perspective on these concepts leads to relevant research questions.

Privacy and law

Privacy has been defined by Westin [7] as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others". Law defines, among other things, the rights of individuals in their interactions with other individuals and institutions, including companies and the state. This includes the right to privacy: law defines to what extent individuals' privacy has to be protected. Privacy (or "data protection") legislation can also require the use of technical measures, including cryptographic ones, to achieve privacy protection.

Privacy and cryptography

We can derive some aspects of the relation between privacy and cryptography from Westin's definition:

  • Cryptography can be applied to ensure the confidentiality of information (about "individuals, groups, or institutions"). Encryption functions directly serve this purpose (a toy sketch follows this list); other cryptographic functions, such as authentication protocols, are also related to this goal, as they help ensure that no unauthorized party gets access to information.
  • Cryptographic research can reduce the amount of information disclosed by cryptographic protocols themselves. For example, anonymous credential schemes [1] allow a user to prove an authorization without revealing the prover's identity. This way, while information is revealed, it is no longer information about a specific individual.
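
As a toy illustration of the first point (assuming the pyca/cryptography Python package; not taken from the talk), symmetric encryption makes information unreadable to anyone who does not hold the key:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # secret key, shared only with authorized parties
    f = Fernet(key)

    token = f.encrypt(b"information about an individual")
    print(token)                  # the ciphertext reveals nothing about the content
    print(f.decrypt(token))       # only a key holder recovers the plaintext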

Where the application of cryptographic schemes or other technical measures is not already required by law, it can be seen as a complementary measure: if legal protection is considered insufficient, individuals can protect themselves (e.g., by using anonymization networks such as Tor [2]).

Cryptography and Law

Cryptographic schemes are used for a large number of business and e-government transactions. Electronic signatures are a prime example. The term is used (in the legal context, as in the European Directive 1999/93/EC on electronic signatures) to designate data that are used to sign, or authenticate, a document. In principle, even a scanned handwritten signature or a typed name can constitute an electronic signature. For electronic signatures to be secure in a technical sense, (cryptographic) digital signature schemes have to be used. In that case, courts have to decide, among other things, whether a digital signature is authentic or not. The security of the cryptographic schemes themselves is not sufficient for this decision, as the mapping of public keys to persons and the secure storage of private keys are not part of typical digital signature scheme definitions. These gaps have to be filled by law, which must also define the precise requirements for digital signatures to become acceptable as evidence.
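
The technical side is routine; the following minimal sketch (assuming the pyca/cryptography Python package) signs a document with a private key and verifies it with the corresponding public key. Note that the legal questions raised above, who holds the private key and how the public key is bound to a person, lie entirely outside this code.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    document = b"I hereby accept the terms of the contract."
    signature = private_key.sign(document)

    public_key.verify(signature, document)        # passes: document is unmodified
    try:
        public_key.verify(signature, document + b"!")
    except InvalidSignature:
        print("tampered document detected")       # any change breaks the signature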

The approaches of the European Union and the United States are vastly different in this respect. The stricter and more detailed regulation in the European Union may help to increase confidence in electronic signatures, but may also be considered as an obstacle to their acceptance. The legal classification of identity-based signatures [4] serves as an example to illustrate the relation between cryptography and law. As it turns out, legal requirements can pose interesting challenges for the development of new identity-based signature schemes.

Privacy, Cryptography, and Law

Cryptographic and legal research in the field of privacy may benefit from each other in at least two ways:

  • Trends in regulation lead to new challenges for cryptography. For example, the privacy implications of smart metering systems were discussed in the legal community, and these discussions prompted the development of technical schemes (beyond the establishment of secure transport channels) to reduce the disclosure of personal data (e.g., [6]); a first sketch of the underlying idea follows this list.
  • Both research communities try to find suitable definitions (or measurements) of anonymity and privacy. One example is the question under which circumstances the use of pseudonyms constitutes sufficient protection of privacy, as the use of pseudonyms has been proven insufficient in a number of cases [3,5]; a second sketch below illustrates why.
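
On the first point, the following is a minimal sketch of privacy-friendly aggregation for smart metering (plain additive masking over a modulus, illustrating the general idea rather than the specific homomorphic scheme of [6]): the utility learns the total consumption, yet no individual reading is ever disclosed.

    import secrets

    M = 2 ** 64                # modulus; readings are assumed far smaller
    readings = [17, 42, 5]     # hypothetical per-household meter readings
    n = len(readings)

    # Each pair of meters (i, j), i < j, shares a random mask r[i][j];
    # meter i adds it and meter j subtracts it, so all masks cancel in the
    # sum while every individual masked value looks uniformly random.
    r = [[secrets.randbelow(M) for _ in range(n)] for _ in range(n)]

    masked = []
    for i in range(n):
        m = readings[i]
        for j in range(n):
            if i < j:
                m = (m + r[i][j]) % M
            elif i > j:
                m = (m - r[j][i]) % M
        masked.append(m)

    print("aggregate consumption:", sum(masked) % M)   # 64, no reading revealed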
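
On the second point, a minimal sketch of the linkage attack underlying the insufficiency of pseudonyms (the records are invented; the attack pattern is the one motivating k-anonymity [5]): quasi-identifiers left in a "pseudonymized" release can be joined with public data to re-identify individuals.

    pseudonymized = [  # released data: (pseudonym, zip, birth_year, sex, diagnosis)
        ("u1", "53715", 1965, "F", "flu"),
        ("u2", "53715", 1982, "M", "diabetes"),
    ]
    voter_roll = [     # public data: (name, zip, birth_year, sex)
        ("Alice Smith", "53715", 1965, "F"),
        ("Bob Jones", "53715", 1982, "M"),
    ]

    # Joining on the quasi-identifiers (zip, birth_year, sex) breaks the pseudonyms.
    for pseud, zc, by, sex, diagnosis in pseudonymized:
        for name, zc2, by2, sex2 in voter_roll:
            if (zc, by, sex) == (zc2, by2, sex2):
                print(f"{pseud} is likely {name}: {diagnosis}")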

References

  1. Jan Camenisch and Anna Lysyanskaya. An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In Birgit Pfitzmann, editor, Advances in Cryptology – EUROCRYPT 2001, volume 2045 of Lecture Notes in Computer Science, pages 93-118. Springer Berlin Heidelberg, 2001.
  2. Roger Dingledine, Nick Mathewson, and Paul Syverson. Tor: The second-generation onion router. In Proceedings of the 13th Conference on USENIX Security Symposium - Volume 13, SSYM'04, pages 21-21, Berkeley, CA, USA, 2004. USENIX Association.
  3. Frank McSherry and Ilya Mironov. Differentially private recommender systems: Building privacy into the net. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '09, pages 627-636, New York, NY, USA, 2009. ACM.
  4. Christoph Sorge. The legal classification of identity-based signatures. Cryptology ePrint Archive, Report 2013/271, 2013. http://eprint.iacr.org/.
  5. Latanya Sweeney. K-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst., 10(5):557-570, October 2002.
  6. Benjamin Vetter, Osman Ugus, Dirk Westhoff, and Christoph Sorge. Homomorphic primitives for a privacy-friendly smart metering architecture. In Proceedings of the International Conference on Security and Cryptography (SECRYPT 2012), 2012.
  7. Alan F. Westin. Privacy and Freedom. Atheneum, New York, 1967.

MACIS 2013





