My name is Tim and I'm the head of Analytics at Sympany (a foundation located in Basel, Switzerland, dedicated to improving the Swiss healthcare system). My group and I create smart data products from internal data, external data, unstructured data and API data. The goal is to provide actionable insights for our customers through the application of various machine learning methods.

I love reading and traveling. During my studies I spent a considerable amount of time abroad on two international exchanges (one year at the University of Limerick, Ireland, and six months at the Université de Lille, France). I also did NGO work at the SOS Children's Villages in Ethiopia, and I speak German, French and English as well as a little Spanish.

I believe in the importance of lifelong education (learning as well as teaching). I hold an economics degree from the Universität Zürich and a statistics degree from the Université de Neuchâtel. More recently I finished a part-time, two-year data science program at Harvard. I also teach business and economics to high school students during economy weeks.

Some Projects

with links and catchy titles

Head of Sanitas Active

Mar 2015 - Jul 2017

The Active App is designed to help Sanitas customers get more exercise and eat more healthily. It counts steps and tracks a user's activity while cycling or swimming, and it uses various nudging mechanisms to encourage its users to maintain a healthy lifestyle.

Lead Data Scientist for Baloise Plus

Jul 2015 - Mar 2015

Baloise Plus is the new benefits program for Baloise customers: the more contracts customers hold, the larger their benefits. I was the lead data scientist on the project.


Project Lead Baloise Risk Map

Aug 2014 - Jan 2015

The interactive risk map for burglary and theft is a new tool for Baloise customers, insurance agents and actuaries to analyse the individual risk distribution at the cantonal as well as the municipal level. It is built on internal data, external data from the Federal Statistical Office, and the D3.js framework.

Paper: Algorithmic Modelling in the Insurance Industry

Feb 2012 - Jun 2012

The paper examines the classification and regression tree (CART) algorithm as a tool to facilitate the model specification required by GLMs when modelling premiums, i.e., using CART for variable selection, interaction detection and a preliminary understanding of the data.
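The core of the CART-based variable-selection idea can be illustrated with a minimal sketch: rank candidate variables by the impurity (variance) reduction achieved by their best single binary split, the same criterion a regression tree uses when growing. The variable names and toy data below are hypothetical, not taken from the paper.

```python
def variance(ys):
    # Population variance of the responses (the regression-tree impurity measure).
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split_gain(xs, ys):
    # Largest variance reduction achievable by one binary split on a single
    # variable, trying every split point between consecutive sorted values.
    pairs = sorted(zip(xs, ys))
    base = variance(ys)
    best = 0.0
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        w = len(left) / len(pairs)
        best = max(best, base - (w * variance(left) + (1 - w) * variance(right)))
    return best

# Toy premium data: "age" drives the response, "noise" does not.
age = [20, 25, 30, 35, 60, 65, 70, 75]
noise = [1, 7, 3, 9, 2, 8, 4, 6]
premium = [100, 105, 110, 115, 300, 310, 305, 320]

gains = {"age": best_split_gain(age, premium),
         "noise": best_split_gain(noise, premium)}
ranked = sorted(gains, key=gains.get, reverse=True)
print(ranked)  # "age" ranks first
```

A full CART implementation applies this greedily and recursively; variables that never produce a large gain are candidates to drop from the GLM specification.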

Paper: Earning While Learning - When and How Student Employment is Beneficial

Aug 2009 - Mar 2010

In the paper, published in "Labour", we examined how different student employment statuses during tertiary education affect short-term and medium-term labor market returns. Using a representative survey of Swiss graduates of tertiary education, we found significant positive labor market returns to 'earning while learning', but only for student employment related to the field of study, not for unrelated student employment.

Paper: The Economic Consequences of the Irish Famine 1845-1850

Apr 2009 - Sep 2009

The paper examines the short- and long-run consequences of the Irish Famine of 1845 to 1850. By adapting Krugman's (1991) two-region model to the Anglo-Irish case, it argues, in line with O'Rourke and other cliometric economists, that the Famine constituted such a shock to the economic fabric of Ireland that the country's subsequent industrial backwardness was a result of it.

Harvard University

Professional Graduate Data Science Coursework


CS 171 - Interactive Data Visualization with D3

Jan 2016 - Jun 2016

The course topics included good design practices for data visualizations, methods for visualizing data from different sources, and programming interactive web-based visualizations with D3 (JavaScript). All the code to reproduce the results below is on Github.



AC 209A - Advanced Topics in Data Science I

Aug 2016 - Dec 2016

The course was the first part of a series on advanced data science methods. Topics included the analysis of high-dimensional data with linear methods such as lasso and ridge regression, Bayesian modeling and sentiment analysis, as well as ensemble methods such as bagging and boosting. Most of the programming was done in Python. All the code to reproduce the results below is on Github.
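To illustrate one of the linear methods listed, here is a minimal sketch of ridge regression in the simplest possible setting: one predictor and no intercept, where the estimate has the closed form beta = Σxy / (Σx² + λ). The data are made up for illustration and are not from the course.

```python
def ridge_1d(xs, ys, lam):
    # One-predictor ridge estimate (no intercept):
    # beta = sum(x*y) / (sum(x^2) + lambda). Setting lam = 0 gives the OLS slope.
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

# Hypothetical data with true slope 2.
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]

print(ridge_1d(xs, ys, lam=0))   # 2.0 -- the OLS slope
print(ridge_1d(xs, ys, lam=30))  # 1.0 -- shrunk toward zero
```

The penalty λ in the denominator is what shrinks the coefficient toward zero; lasso replaces the squared penalty with an absolute-value penalty, which can set coefficients exactly to zero.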



AC 209B - Advanced Topics in Data Science II

Jan 2017 - Jun 2017

The course was the second part of the series on advanced data science methods. Topics included non-linear statistical models such as smoothers and GAMs, unsupervised learning, and deep learning with neural networks. The major programming languages were R and Python. All the code to reproduce the results below is on Github.
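The simplest smoother of the kind used as a building block in such courses is a running mean: replace each observation by the average of its neighbours. A minimal Python sketch (the signal below is invented for illustration):

```python
def running_mean_smoother(ys, window=3):
    # Classic running-mean scatterplot smoother; the averaging window is
    # truncated at the edges of the series.
    half = window // 2
    smoothed = []
    for i in range(len(ys)):
        lo = max(0, i - half)
        hi = min(len(ys), i + half + 1)
        smoothed.append(sum(ys[lo:hi]) / (hi - lo))
    return smoothed

# A zig-zag signal: smoothing damps the oscillation.
raw = [0, 10, 0, 10, 0, 10]
smooth = running_mean_smoother(raw)
print(smooth)
```

GAMs generalize this idea: each predictor gets its own smooth function, estimated by more refined smoothers (splines, local regression) and combined additively.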



E 63 - Big Data Analytics

Aug 2017 - Dec 2017

The emphasis of this course was on mastering important big data technologies. The focus lay on Spark, TensorFlow and streaming technologies that allow analysis of data in flight, that is, in near real time. The course also examined so-called NoSQL storage solutions, exemplified by Cassandra, as well as memory-resident databases, graph databases (GraphX and Neo4j) and scalable messaging systems such as Kafka and Amazon Kinesis. All the code to reproduce the results below is on Github.

Want more?

Hit me up at any of the links below.

Here is a copy of my CV.