This page will continue to be developed as the Hack Session approaches
A sample of project ideas for the Hack Session. These are meant to be representative of the types of problems people might be interested in working on and to help organize ideas about any leg-work that might be necessary prior to the Hack Session. Projects will be pitched at the Hack Session. Feel free to draw from this list or to pitch your own project.
The listing below may be more complete than the sidebar (depending on the size of your screen). Note that the links may land at a point just past the heading, so you may need to scroll up a bit.
Finding the microlensing model that best fits the data is a challenging problem. Many types of degeneracies have already been identified. For some events, models were published and better models were found later. Other events remain unpublished because modeling efforts did not reveal a model that fits the data well. In still other cases, there are models with significantly different properties and it is unclear which model is more probable. It is possible that some physical effect was ignored in previous modeling and that including it will lead to significant improvement. This project aims to find proper models for these unsolved events.
We will focus on events for which significant modeling efforts were already performed. The best-suited event is ob08270. We previously considered other events, but it turns out they are already solved: Gaia16aye (Wyrzykowski et al. 2017) and ob110417 (Bachelet et al. 2018). We are looking for other unsolved events with non-proprietary datasets.
The Data Analysis Challenge is currently ongoing. The submission deadline is Oct 31. Before the hack session, we should know which events were particularly problematic for challenge participants. For these events we will know the correct model. We will search for an algorithm that finds the correct model efficiently without using this knowledge.
Clement Ranc, David Bennett and volunteers?
Most analyses presently do little more than weight degenerate microlensing model solutions by their chi^2. However, these degenerate binary-lens solutions must be properly taken into account in statistical analyses, including inferences about the demographics of planets from microlensing detections.
Weighting different microlensing models by chi^2 works when the chi^2 surface at each minimum has a similar shape or volume, but it fails when the shapes of the minima are very different. We need methods that properly sample all the relevant chi^2 minima.
Optimized algorithms that might be tested: Hamiltonian MCMC, a hierarchical Bayesian algorithm, nested sampling, or …?
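As a toy illustration of why the shape and volume of a minimum matter (the numbers below are made up, not from any real event): pure delta-chi^2 weighting gives two equally deep minima equal weight, while a Laplace (Gaussian) approximation to the evidence also counts each minimum's width:

```python
import numpy as np

# Two hypothetical 1-D chi^2 minima with equal depth but different widths.
chi2_min = np.array([100.0, 100.0])   # best-fit chi^2 of each solution
sigma = np.array([0.01, 0.1])         # width (posterior sigma) of each minimum

# Pure chi^2 weighting: only the depth of the minimum enters.
w_chi2 = np.exp(-0.5 * (chi2_min - chi2_min.min()))
w_chi2 /= w_chi2.sum()                # both solutions get weight 0.5

# Laplace approximation: the evidence Z_i is proportional to
# exp(-chi2_i / 2) times the volume of the minimum (here ~ sigma_i).
w_laplace = np.exp(-0.5 * (chi2_min - chi2_min.min())) * sigma
w_laplace /= w_laplace.sum()          # the broad minimum carries 10x the weight
```

The same volume factor is what nested sampling or a hierarchical Bayesian treatment would account for automatically.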
Bennett+2008 (2008ApJ...684..663B), Poleski+2018 (2018arXiv180500049P)
Etienne Bachelet and friends
While the standard grid method is extremely CPU-intensive and already shows some weaknesses in finding the global minimum, alternatives exist that appear competitive. The goal of this project is to provide light curves to be examined and fit with alternative algorithms. A tutorial notebook will be provided to easily generate magnification functions and the like. Anyone will then be able to investigate their own approach.
The pyLIMA team will provide the requirements.
Triple lenses are an unsolved problem in microlensing of incredible scientific interest. These are systems consisting of three bodies, which may correspond to stars with more than one planet or to planets in binary star systems. In the latter case, microlensing has already found two such systems with very different properties from the circumbinary planets found by Kepler. At the same time, these investigations have revealed many complexities in modeling triple lens events, such as degeneracies between a two-planets-one-star scenario and a one-planet-two-stars scenario. Furthermore, the published work has necessarily been limited to events that have been solved. There remain several microlensing events that may be due to triple lenses but for which solutions have not been found.
Writing the lens equation is simple:
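In the standard complex-coordinate convention (source position $\zeta$, image position $z$, fractional lens masses $\varepsilon_i$ at positions $z_i$, with all lengths in Einstein-radius units), it reads:

```latex
\zeta = z - \sum_{i=1}^{3} \frac{\varepsilon_i}{\bar{z} - \bar{z}_i},
\qquad \sum_{i=1}^{3} \varepsilon_i = 1 .
```

Clearing the complex conjugates from this equation is what produces the 10th-order polynomial discussed below.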
Solving this equation for the magnification of the source star (particularly one of finite extent) as a function of time is computationally expensive. The equation can be rewritten as a 10th-order complex polynomial that is solved numerically. Typically this is done using XXX, but it is unstable. Alternatively, one can use a Monte Carlo approach to map light rays from the lens plane to the source plane and count the fraction that fall on the face of the source. This is extremely computationally expensive, and if the grid on the lens plane is too sparse it is easy to miss small images, which leads to inaccuracies in the magnification calculation.
In addition, a triple lens model is described by N parameters that define a highly correlated likelihood space with multiple minima. This makes an exhaustive search of parameter space computationally intractable. So far, known triple lens solutions have generally been limited to those that are closely approximated by the linear sum of two two-body solutions. However, many more complex morphologies are known.
Thus, there are two distinct problems to work on.
Discuss numerical methods to robustly solve the three-body lens equation.
The main issue here is the solution of the 10th-order polynomial equation that yields the solutions of the lens equation. I often find that this requires calculations in quadruple precision, which can slow down the code significantly. More efficient polynomial solvers or coordinate transformations that avoid the need for quadruple precision could really help.
Discuss methods for efficiently and sufficiently searching parameter space for solutions to light curves created by 3 bodies (minimum of 9 correlated parameters).
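As an illustration only (a toy two-parameter chi^2 standing in for the 9+ correlated lens parameters), one common strategy is to seed local optimizations from a coarse grid and keep every distinct minimum, not just the global one:

```python
import numpy as np
from scipy.optimize import minimize

# Toy chi^2 with two separated minima, standing in for degenerate lens models:
# a global minimum near (1, 0) and a local one near (-1, 0.5).
def chi2(p):
    x, y = p
    return 100 * min((x - 1)**2 + y**2,
                     0.5 + (x + 1)**2 + (y - 0.5)**2)

# Seed a local optimizer from every node of a coarse grid.
seeds = [(x0, y0) for x0 in np.linspace(-2, 2, 5)
                  for y0 in np.linspace(-1, 1, 3)]
fits = [minimize(chi2, s, method='Nelder-Mead') for s in seeds]

# Keep all distinct minima (rounded to merge duplicates), not only the best.
minima = sorted({(round(f.x[0], 2), round(f.x[1], 2)) for f in fits})
```

The real problem is much harder because the grid dimensions themselves (e.g. separation, mass ratio) must be chosen carefully, but the keep-all-minima bookkeeping is the same.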
Some of these have data available on the Exoplanet Archive.
Two of the more difficult events include non-public data. We could try to make these data public by the time of the Hack Session, or at least release them to participants. These are:
Write a module for MulensModel to calculate the magnification using the ray-shooting method.
One difficulty in accurately calculating the magnification of a finite source is to locate all of the images, especially if one or more of them is small. Using the ray-shooting method, we know that if one light ray falls on one side of the source and another one falls on the other side, there must be a light ray in between that falls directly on the source. A method that incorporates this information could result in gains in efficiency and accuracy over other methods by identifying all of the images and only shooting rays densely in the regions of those images.
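A minimal sketch of this idea (our own toy implementation, not an existing MulensModel API): shoot a coarse grid of rays, flag cells whose neighbors disagree about landing inside the source, and subsample only those boundary cells:

```python
import numpy as np

def shoot(z, masses, lens_pos):
    # Inverse ray shooting: map image-plane points z to the source plane
    # using the complex lens equation.
    zeta = z.astype(complex)
    for m, zl in zip(masses, lens_pos):
        zeta = zeta - m / np.conj(z - zl)
    return zeta

def magnification(masses, lens_pos, zeta0, rho, half_width=2.0, n=800, k=8):
    # Coarse grid of rays on the image plane.
    x = np.linspace(-half_width, half_width, n)
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x)
    z = X + 1j * Y
    with np.errstate(divide='ignore', invalid='ignore'):
        inside = np.abs(shoot(z, masses, lens_pos) - zeta0) < rho

    # Cells whose neighbor disagrees straddle the source edge: refine them.
    edge = np.zeros_like(inside)
    dx = inside[:-1, :] != inside[1:, :]
    edge[:-1, :] |= dx; edge[1:, :] |= dx
    dy = inside[:, :-1] != inside[:, 1:]
    edge[:, :-1] |= dy; edge[:, 1:] |= dy

    # Subsample each edge cell with a k x k grid of rays.
    iy, ix = np.nonzero(edge)
    offs = ((np.arange(k) + 0.5) / k - 0.5) * h
    sub = z[iy, ix][:, None, None] + offs[None, :, None] + 1j * offs[None, None, :]
    with np.errstate(divide='ignore', invalid='ignore'):
        frac = (np.abs(shoot(sub, masses, lens_pos) - zeta0) < rho).mean(axis=(1, 2))

    # Magnification = image area / source area.
    count = inside[~edge].sum() + frac.sum()
    return count * h * h / (np.pi * rho**2)
```

For a single lens with the source centered on it, this reproduces the analytic finite-source magnification sqrt(rho^2 + 4)/rho to about a percent at modest grid sizes; the subdivision idea described above would extend the refinement recursively instead of using a single fixed subgrid.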
Radek Poleski and Jennifer Yee
Write a module to calculate the magnification of a binary lens using the map-making method.
Map-making is one method for numerically calculating the magnification of a finite source. This is essentially a Monte Carlo method in which ~10^7 light rays are “shot” from the image (lens = planet host star) plane and collected and counted on the source plane. The magnification of the source is then proportional to the number of rays contained within the boundary of the source. Typical routines use a hexagonal grid on the source plane with many resolution elements relative to the size of the source.
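A toy version of map-making (the function here is our own sketch, not pyLIMA or MulensModel code) can be written with a square ray grid and numpy histogramming; a production module would use the hexagonal grid and the much higher ray density described above:

```python
import numpy as np

def magnification_map(masses, lens_pos, half_width=2.0, n_rays=2000,
                      map_half=1.0, n_bins=100):
    # Shoot a dense regular grid of rays from the image (lens) plane.
    x = np.linspace(-half_width, half_width, n_rays)
    X, Y = np.meshgrid(x, x)
    z = (X + 1j * Y).ravel()
    zeta = z.copy()
    with np.errstate(divide='ignore', invalid='ignore'):
        for m, zl in zip(masses, lens_pos):
            zeta -= m / np.conj(z - zl)

    # Collect and count the arriving rays in source-plane bins.
    good = np.isfinite(zeta)
    H, _, _ = np.histogram2d(zeta[good].real, zeta[good].imag, bins=n_bins,
                             range=[[-map_half, map_half], [-map_half, map_half]])

    # Normalize: magnification = ray count / (shot density * bin area).
    ray_density = (n_rays / (2 * half_width))**2
    bin_area = (2 * map_half / n_bins)**2
    return H / (ray_density * bin_area)
```

Once the map is built, the magnification of a finite source at any position is just the (limb-darkening-weighted) sum of the bins it covers, which is what makes map-making efficient for fitting many source trajectories.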
Create a Jupyter notebook using MulensModel that robustly finds solutions to binary microlensing events.
Create a Galaxy Zoo project to classify UKIRT light curves
The Galaxy Zoo platform uses “citizen scientists” to classify objects in large datasets. We could use this platform to create a project to classify UKIRT light curves as microlensing/not microlensing, regular/irregular variable, flat, and anomalous/point lens microlensing. Having a large set of human-classified objects would give us a check on the completeness and reliability of the algorithms used to identify microlensing events. For example, the algorithm used for UKIRT events is designed for finding point-source–point-lens events. Although it does find binary lenses, we don’t know how complete it is for such objects. We could also consider injecting fake microlensing events into the UKIRT dataset to better understand our completeness.
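Injecting a fake event can be as simple as multiplying the observed fluxes by a Paczynski (point-source point-lens) magnification curve; this sketch uses our own function names, and a real injection pipeline would also need the survey's blending and error model:

```python
import numpy as np

def paczynski_magnification(t, t0, u0, tE):
    # Point-source point-lens magnification A(u) with
    # u(t)^2 = u0^2 + ((t - t0) / tE)^2.
    u2 = u0**2 + ((t - t0) / tE)**2
    return (u2 + 2) / np.sqrt(u2 * (u2 + 4))

def inject_event(t, flux, t0, u0, tE, f_source_frac=1.0):
    # Magnify only the source fraction of the baseline flux;
    # f_source_frac < 1 models blending with unlensed light.
    A = paczynski_magnification(t, t0, u0, tE)
    return flux * (f_source_frac * A + (1 - f_source_frac))
```

Recovering such injected events with the standard UKIRT selection algorithm would directly measure the completeness discussed above.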