Course: Bachelor-Seminar Netz- und Datensicherheit (Bachelor Seminar on Network and Data Security)

Number:
143241
Type of course:
Seminar
Media:
computer-based presentation
Responsible:
Prof. Dr. Jörg Schwenk
Lecturer:
Prof. Dr. Jörg Schwenk (ETIT)
Language:
German
Contact hours (SWS):
3
Credits (LP):
3
Offered in:
winter and summer semester

Dates in the winter semester

  • Preliminary meeting: Tuesday, 18 October 2016, from 14:15 in room ID 04/413
  • Seminar sessions: Tuesdays, 14:15 to 16:45 in room ID 04/413

Dates in the summer semester

  • Start: Tuesday, 18 April 2017, from 15:00 in room ID 03/471
  • Seminar sessions: Tuesdays, 15:00 to 16:45 in room ID 03/471

Examination

Seminar contribution

continuous assessment during the semester (studienbegleitend)

Goals

Participants are able to find, obtain, understand, and evaluate technical and scientific literature.

Content

Selected topics in IT security related to network and data security are worked out independently by the students. Where possible, topics are chosen to match a currently running elective course in order to exploit didactic synergies.

Prerequisites

none

Recommended prior knowledge

Basic knowledge of cryptography

Materials

Slides:

Sample solutions:

Miscellaneous

This course is offered as a block seminar.

Preliminary schedule:

  • Preliminary meeting and topic assignment on 18 October 2016 from 14:15 in room ID 04/413. Please contact Juraj Somorovsky in advance if you would like to attend the preliminary meeting.
  • Application with an exposé: 1 November 2016
  • Acceptance notification: 4 November 2016
  • Submission deadline for a preliminary version of the written report: 10 December 2016
  • Presentations: to be announced
  • Submission deadline for the final version of the written report: 20 February 2017
  • Announcement of the award winner / reporting of results to the examination office: from the beginning of the following semester

Note: No certificates of attendance or achievement are issued. Results are reported directly to the examination office.

Questions (contact): Juraj Somorovsky (juraj.somorovsky[at]rub.de)

Written reports: examples at http://nds.rub.de/teaching/BestStudentPaperAward/, template at http://nds.rub.de/teaching/theses/seminar/

Remarks:

The goal of the seminar is the presentation of a scientific publication. Previously published articles are offered for selection.

Participants are expected to present the publication in an understandable way within the seminar and to introduce any required background briefly and precisely.

Before a pre-selected seminar topic is assigned, every candidate for that topic must submit a two-page exposé to the respective supervisor. Based on the exposés, the supervisor selects the candidate who will work on the topic.

The written report should be about 15 pages long; exceptions or deviations must be agreed with the respective supervisor. A preliminary version of the written report must be submitted to the supervisor before the presentation date. The supervisor corrects this version once; the corrections must be incorporated into the final version of the report.

A seminar talk usually takes 20 to 30 minutes, including the subsequent Q&A session. The slide design and the language of the talk (German or English) are up to you. Please submit your report and presentation in PDF format. Supervisors may ask questions and request corrections during the talk if clarification or improvement is needed.

Attendance at every session is mandatory and is recorded on an attendance sheet. If you cannot attend a session, notify the respective supervisor and the seminar coordinator in advance, stating the reason. Missed talks must be obtained and worked through independently (about one page per talk), and the write-ups must be sent to the respective supervisor within two weeks.

Offered topics:

Arnhold  

The Million-Key Question - Investigating the Origins of RSA Public Keys

Can bits of an RSA public key leak information about design and implementation choices such as the prime generation algorithm? We analysed over 60 million freshly generated key pairs from 22 open- and closed-source libraries and from 16 different smartcards, revealing significant leakage. The bias introduced by different choices is sufficiently large to classify a probable library or smartcard with high accuracy based only on the values of public keys. Such a classification can be used to decrease the anonymity set of users of anonymous mailers or operators of linked Tor hidden services, to quickly detect keys from the same vulnerable library or to verify a claim of use of secure hardware by a remote party. The classification of the key origins of more than 10 million RSA-based IPv4 TLS keys and 1.4 million PGP keys also provides an independent estimation of the libraries that are most commonly used to generate the keys found on the Internet.

Link: https://www.usenix.org/node/197198
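To make the classification idea concrete, here is a minimal sketch (our own illustration, not the paper's feature set or code) of extracting simple bias-sensitive features from a public modulus N; names like `key_features` and the chosen features are hypothetical:

```python
# Hypothetical sketch: key-origin classification works because properties
# visible in the public key alone (top bits of N, residues modulo small
# numbers, bit length) reflect each library's prime-generation choices.

def key_features(n: int) -> dict:
    """Extract simple bias-sensitive features from an RSA modulus n."""
    bits = n.bit_length()
    return {
        "bit_length": bits,
        "msb": n >> (bits - 8),  # top 8 bits of N; depends on prime bounds
        "n_mod_3": n % 3,        # residues modulo small numbers can be biased
        "lsb": n & 0xFF,         # low byte (always odd for a valid modulus)
    }

# Toy example with the textbook modulus n = 61 * 53 = 3233:
print(key_features(61 * 53))
```

A real classifier would collect such feature vectors for millions of keys per library and fit a probabilistic model over them.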

Felsch
Rimkus  

An In-Depth Study of More Than Ten Years of Java Exploitation

When created, the Java platform was among the first runtimes designed with security in mind. Yet, numerous Java versions were shown to contain far-reaching vulnerabilities, permitting denial-of-service attacks or, even worse, allowing intruders to bypass the runtime's sandbox mechanisms, opening the host system up to many kinds of further attacks.

This paper presents a systematic in-depth study of 87 publicly available Java exploits found in the wild. By collecting, minimizing and categorizing those exploits, we identify their commonalities and root causes, with the goal of determining the weak spots in the Java security architecture and possible countermeasures.

Our findings reveal that the exploits heavily rely on a set of nine weaknesses, including unauthorized use of restricted classes and confused deputies in combination with caller-sensitive methods. We further show that all attack vectors implemented by the exploits belong to one of three categories: single-step attacks, restricted-class attacks, and information hiding attacks.

The analysis allows us to propose ideas for improving the security architecture to spawn further research in this area.

Link: http://www.abartel.net/static/p/ccs2016-10yearsJavaExploits.pdf

Felsch
Jansen  

Alpenhorn: Bootstrapping Secure Communication without Leaking Metadata

Alpenhorn is the first system for initiating an encrypted connection between two users that provides strong privacy and forward secrecy guarantees for metadata (i.e., information about which users connected to each other) and that does not require out-of-band communication other than knowing the other user's Alpenhorn username (email address). This resolves a significant shortcoming in all prior works on private messaging, which assume an out-of-band key distribution mechanism.

Alpenhorn's design builds on three ideas. First, Alpenhorn provides each user with an address book of friends that the user can call. Second, when a user adds a friend for the first time, Alpenhorn ensures the adversary does not learn the friend's identity, by using identity-based encryption in a novel way to privately determine the friend's public key. Finally, when calling a friend, Alpenhorn ensures forward secrecy of metadata by storing pairwise shared secrets in friends' address books, and evolving them over time, using a new keywheel construction. Alpenhorn relies on a number of servers, but operates in an anytrust model, requiring just one of the servers to be honest.

We implemented a prototype of Alpenhorn, and integrated it into the Vuvuzela private messaging system (which did not previously provide privacy or forward secrecy of metadata when initiating conversations). Experimental results show that Alpenhorn can scale to many users, supporting 10 million users on three Alpenhorn servers with an average dial latency of 150 seconds and a client bandwidth overhead of 3.7 KB/sec.

Link: https://davidlazar.org/papers/alpenhorn.pdf
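The keywheel construction mentioned in the abstract can be sketched as follows (a minimal illustration under our own assumptions, not Alpenhorn's actual code): both friends advance a pairwise shared secret once per time epoch by hashing it, so compromising the current secret does not reveal secrets from past epochs.

```python
import hashlib

def advance(secret: bytes) -> bytes:
    """One keywheel step: derive the next epoch's secret, then forget the old one."""
    return hashlib.sha256(b"keywheel" + secret).digest()

def secret_at(initial: bytes, epoch: int) -> bytes:
    """Secret both friends hold at a given epoch, starting from a shared value."""
    s = initial
    for _ in range(epoch):
        s = advance(s)
    return s

# Both friends stay in sync simply by counting epochs:
s0 = b"\x00" * 32
assert secret_at(s0, 5) == advance(secret_at(s0, 4))
```

Forward secrecy follows from the one-wayness of the hash: epoch 5's secret cannot be rolled back to epoch 4's.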

Rösler
free  

Privacy-Preserving Group Data Access via Stateless Oblivious RAM Simulation

We study the problem of providing privacy-preserving access to an outsourced honest-but-curious data repository for a group of trusted users. We show that such privacy-preserving data access is possible using a combination of probabilistic encryption, which directly hides data values, and stateless oblivious RAM simulation, which hides the pattern of data accesses. We give simulations that have only an O(log n) amortized time overhead for simulating a RAM algorithm, A, that has a memory of size n, using a scheme that is data-oblivious with very high probability, assuming the simulation has access to a private workspace of size O(n^ε), for any given fixed constant ε > 0. This simulation makes use of pseudorandom hash functions and is based on a novel hierarchy of cuckoo hash tables that all share a common stash. We also provide results from an experimental simulation of this scheme, showing its practicality. In addition, in a result that may be of some theoretical interest, we also show that one can eliminate the dependence on pseudorandom hash functions in our simulation while having the overhead rise to be O(log^2 n).

Link: https://arxiv.org/pdf/1105.4125.pdf

Rösler
Florian P.  

On the Practical (In-)Security of 64-bit Block Ciphers: Collision Attacks on HTTP over TLS and OpenVPN

While modern block ciphers, such as AES, have a block size of at least 128 bits, there are many 64-bit block ciphers, such as 3DES and Blowfish, that are still widely supported in Internet security protocols such as TLS, SSH, and IPsec. When used in CBC mode, these ciphers are known to be susceptible to collision attacks when they are used to encrypt around 2^32 blocks of data (the so-called birthday bound). This threat has traditionally been dismissed as impractical since it requires some prior knowledge of the plaintext and even then, it only leaks a few secret bits per gigabyte. Indeed, practical collision attacks have never been demonstrated against any mainstream security protocol, leading to the continued use of 64-bit ciphers on the Internet.

In this work, we demonstrate two concrete attacks that exploit collisions on short block ciphers. First, we present an attack on the use of 3DES in HTTPS that can be used to recover a secret session cookie. Second, we show how a similar attack on Blowfish can be used to recover HTTP BasicAuth credentials sent over OpenVPN connections. In our proof-of-concept demos, the attacker needs to capture about 785 GB of data, which takes between 19 and 38 hours in our setting. This complexity is comparable to the recent RC4 attacks on TLS: the only fully implemented attack takes 75 hours. We evaluate the impact of our attacks by measuring the use of 64-bit block ciphers in real-world protocols. We discuss mitigations, such as disabling all 64-bit block ciphers, and report on the response of various software vendors to our responsible disclosure of these attacks.

Link: http://eprint.iacr.org/2016/798
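The birthday bound cited in the abstract can be checked with a quick back-of-the-envelope calculation: for a 64-bit block cipher, the probability of at least one ciphertext-block collision among n blocks is roughly 1 - exp(-n(n-1)/2^65).

```python
import math

def collision_probability(n_blocks: int, block_bits: int = 64) -> float:
    """Birthday approximation: P(at least one block collision) among n_blocks."""
    return 1.0 - math.exp(-n_blocks * (n_blocks - 1) / 2 ** (block_bits + 1))

n = 2 ** 32                    # the birthday bound for 64-bit blocks
p = collision_probability(n)   # roughly 1 - e^(-1/2), about 0.39
data_gib = n * 8 / 2 ** 30     # 2^32 blocks of 8 bytes = 32 GiB of ciphertext

print(f"P(collision) = {p:.2f} after {data_gib:.0f} GiB")
```

This is why capturing on the order of hundreds of gigabytes (the paper's 785 GB) yields many usable collisions, not just one.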

Horst
Theis  

MitM Attack by Name Collision: Cause Analysis and Vulnerability Assessment in the New gTLD Era

Recently, Man in the Middle (MitM) attacks on web browsing have become easier than they have ever been before because of a problem called "Name Collision" and a protocol called the Web Proxy Auto-Discovery (WPAD) protocol. This name collision attack can cause all web traffic of an Internet user to be redirected to a MitM proxy automatically right after the launching of a standard browser. The underlying problem of this attack is internal namespace WPAD query leakage, which has itself been a known problem for years. However, it remains understudied since it was not easily exploitable before the recent new gTLD (generic Top-Level Domains) delegation.

Link: http://www.ieee-security.org/TC/SP2016/papers/0824a675.pdf

Horst
  TBA

Robust Defenses for Cross-Site Request Forgery

Cross-Site Request Forgery (CSRF) is a widely exploited web site vulnerability. In this paper, we present a new variation on CSRF attacks, login CSRF, in which the attacker forges a cross-site request to the login form, logging the victim into the honest web site as the attacker. The severity of a login CSRF vulnerability varies by site, but it can be as severe as a cross-site scripting vulnerability. We detail three major CSRF defense techniques and find shortcomings with each technique. Although the HTTP Referer header could provide an effective defense, our experimental observation of 283,945 advertisement impressions indicates that the header is widely blocked at the network layer due to privacy concerns. Our observations do suggest, however, that the header can be used today as a reliable CSRF defense over HTTPS, making it particularly well-suited for defending against login CSRF. For the long term, we propose that browsers implement the Origin header, which provides the security benefits of the Referer header while responding to privacy concerns.

Further information on request (christopher.spaeth@rub.de)

Link: http://seclab.stanford.edu/websec/csrf/csrf.pdf
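The Origin-header defense the paper proposes can be sketched as a simple server-side check (a framework-agnostic illustration in plain Python; the function name, header dictionary, and trusted-origin set are our own assumptions, not the paper's code):

```python
# Hypothetical trusted origin for illustration:
TRUSTED_ORIGINS = {"https://example.com"}

def is_allowed(method: str, headers: dict) -> bool:
    """Reject state-changing cross-site requests whose Origin is untrusted."""
    if method in ("GET", "HEAD", "OPTIONS"):
        return True                 # safe methods need no CSRF check
    origin = headers.get("Origin")
    if origin is None:
        return False                # conservative: no Origin header, no write
    return origin in TRUSTED_ORIGINS

assert is_allowed("POST", {"Origin": "https://example.com"})
assert not is_allowed("POST", {"Origin": "https://attacker.example"})
```

Unlike Referer, the Origin header carries no path or query information, which is the privacy argument the paper makes for browser adoption.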
Späth
Karsten MzS  

DDoSCoin: Cryptocurrency with a Malicious Proof-of-Work

Since its creation in 2009, Bitcoin has used a hash-based proof-of-work to generate new blocks, and create a single public ledger of transactions. The hash-based computational puzzle employed by Bitcoin is instrumental to its security, preventing Sybil attacks and making double-spending attacks more difficult. However, there have been concerns over the efficiency of this proof-of-work puzzle, and alternative "useful" proofs have been proposed.

In this paper, we present DDoSCoin, which is a cryptocurrency with a malicious proof-of-work. DDoSCoin allows miners to prove that they have contributed to a distributed denial of service attack against specific target servers. This proof involves making a large number of TLS connections to a target server, and using cryptographic responses to prove that a large number of connections has been made. Like proof-of-work puzzles, these proofs are inexpensive to verify, and can be made arbitrarily difficult to solve.

Link: https://www.usenix.org/conference/woot16/workshop-program/presentation/wustrow
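The hash-based proof-of-work the abstract contrasts DDoSCoin against (the Bitcoin-style puzzle, not DDoSCoin's TLS-based variant) can be sketched as: find a nonce so that H(data || nonce) falls below a difficulty target. This is a generic illustration, not any coin's actual consensus code.

```python
import hashlib

def solve(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(data || nonce) < 2^(256 - difficulty)."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a claimed solution takes a single hash, regardless of difficulty."""
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < 2 ** (256 - difficulty_bits)

nonce = solve(b"block header", 12)   # low difficulty so the demo is fast
assert verify(b"block header", nonce, 12)
```

The asymmetry shown here (expensive `solve`, one-hash `verify`) is exactly the property DDoSCoin preserves while replacing the hash search with proof of TLS connections to a victim.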

Somorovsky
Ebert  

Host of Troubles: Multiple Host Ambiguities in HTTP Implementations

The Host header is a security-critical component in an HTTP request, as it is used as the basis for enforcing security and caching policies. While the current specification is generally clear on how host-related protocol fields should be parsed and interpreted, we find that the implementations are problematic. We tested a variety of widely deployed HTTP implementations and discover a wide range of non-compliant and inconsistent host processing behaviours. The particular problem is that when facing a carefully crafted HTTP request with ambiguous host fields (e.g., with multiple Host headers), two different HTTP implementations often accept and understand it differently when operating on the same request in sequence. We show a number of techniques to induce inconsistent interpretations of host between HTTP implementations and how the inconsistency leads to severe attacks such as HTTP cache poisoning and security policy bypass. The prevalence of the problem highlights the potential negative impact of gaps between the specifications and implementations of Internet protocols.

Link: http://www.icir.org/vern/papers/host-of-troubles.ccs16.pdf
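The multiple-Host ambiguity described above can be illustrated with a tiny parser experiment (our own sketch, not the paper's test harness): one implementation may honor the first Host header, another the last, so a request with duplicate Host headers is routed differently by each.

```python
# A crafted request with two Host headers (the ambiguous case from the paper):
RAW = (b"GET / HTTP/1.1\r\n"
       b"Host: cache.example\r\n"
       b"Host: victim.example\r\n\r\n")

def hosts(raw: bytes) -> list:
    """Return all Host header values in order of appearance."""
    return [line.split(b":", 1)[1].strip().decode()
            for line in raw.split(b"\r\n")
            if line.lower().startswith(b"host:")]

first_wins = hosts(RAW)[0]    # e.g. what a cache in front might key on
last_wins = hosts(RAW)[-1]    # e.g. what the origin server might serve
assert first_wins != last_wins  # the inconsistency that enables cache poisoning
```

When two implementations in sequence (cache and origin, as hypothetical roles here) disagree like this, a response for one host can be cached under the other.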

Somorovsky