Webpage for the microlensing hack session at the CCA, January 30 -- February 1 2019.

*This page will continue to be developed as the Hack Session approaches*

A sample of project ideas for the Hack Session. These are meant to be
representative of the *types* of problems people might be interested
in working on and to help organize ideas about any leg-work that might
be necessary prior to the Hack Session. Projects will be pitched at
the Hack Session. Feel free to draw from this list or to pitch your
own project.

The listing below is more complete than the sidebar (whose contents depend on the size of your screen). Note that the links may jump to a point just past the heading, so you may need to scroll up a bit.

Modeling Complex Events: General

- Solving Unsolved Events
- Weighting Degenerate Solutions
- Alternative algorithms for microlensing modeling

Modeling Complex Events: Triple Lenses

- [Overview](#overview)
- Triple Lenses: Calculating the Magnification
- Triple Lenses: Finding All Relevant Solutions

- Ray-Shooting Algorithm
- Map-making code
- Jupyter Binary Fitting
- Create a Galaxy Zoo-type project for UKIRT data

Radek Poleski

Finding a microlensing model that fits the data well is a challenging problem, and many types of degeneracies have already been identified. Models of some events were published, and better models were found later. Other events remain unpublished because modeling efforts did not reveal a model that fits the data well. In still other cases, there are models with significantly different properties, and it is unclear which is more probable. It is possible that some physical effect was ignored in previous modeling and that including it will lead to significant improvement. This project aims to find proper models for these unsolved events.

We will focus on events for which significant modeling efforts were already performed. The best-suited event is ob08270. We previously considered other events, but it turns out they are already solved: Gaia16aye (Wyrzykowski et al. 2017) and ob110417 (Bachelet et al. 2018). We are looking for other unsolved events with non-proprietary datasets.

The Data Analysis Challenge is currently ongoing; the submission deadline is Oct 31. Before the hack session, we should know which events were particularly problematic for challenge participants. For these events we will know the correct model. We will search for an algorithm that does not use this knowledge and finds the correct model efficiently.

- Obtaining data for unsolved events.
- Finding problematic events in Data Challenge.

Clement Ranc, David Bennett and volunteers?

- Sampling the posterior distribution for a microlensing event that has several degenerate solutions.
- Comparing the marginal distributions of a few given parameters derived from a regular Metropolis-Hastings algorithm with distributions derived from more optimized algorithms.

Most analyses presently do little more than weight degenerate microlensing solutions by their chi^2. However, these degenerate binary-lens solutions must be taken into account in statistical analyses, including inferences about the planet demographics derived from microlensing detections.

Weighting different microlensing models by chi^2 works when the chi^2 surface at each minimum has a similar shape and volume, but it fails when the minima have very different shapes. We need methods to properly sample all the relevant chi^2 minima.

Optimized algorithms that might be tested: Hamiltonian MCMC, a hierarchical Bayesian algorithm, nested sampling, or others.
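To make the problem concrete, the chi^2-only weighting amounts to the sketch below (the chi^2 values are made up for illustration). Note that it assigns weight purely from the depth of each minimum and ignores its posterior volume, which is exactly the shortcoming described above:

```python
import math

def chi2_weights(chi2_values):
    """Relative weights w_i proportional to exp(-delta_chi2 / 2), normalized to 1."""
    chi2_min = min(chi2_values)
    raw = [math.exp(-(c - chi2_min) / 2.0) for c in chi2_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical chi^2 values for three degenerate solutions of one event.
chi2 = {"close": 1234.5, "wide": 1236.7, "binary-source": 1240.1}
weights = dict(zip(chi2, chi2_weights(list(chi2.values()))))
```

A volume-aware comparison would instead multiply by the relative evidence of each minimum, which is what nested sampling provides directly.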

- Public code(s) to generate binary-lens microlensing light curves (pyLIMA, MulensModel, muLAn).
- Posterior distributions from an already published paper, or for any event we would like to consider. Examples of events: MB07192 (Bennett+2008), OB11950, OB110173 (Poleski+2018, a study that already includes a discussion of exploration methods and uses Nested Sampling).
- Optimized MCMC algorithms available and already tested on these events.

- Use and test of optimized MCMC algorithms.
- Modeling of not-yet-published data.

- Microlensing friends come with working codes that compute light curves for a given set of parameters.
- Statistics friends come with codes that explore high-dimensional or degenerate parameter spaces.
- Both try to connect their codes and use the result on an event they have chosen (possibly from a short-list of chosen events).
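The "connection" step can be as simple as agreeing on a single log-likelihood callable. A minimal sketch, using the point-source point-lens (Paczynski) magnification as a stand-in for whatever light-curve code is brought along (all function names here are hypothetical):

```python
import math

def magnification(t, t_0, u_0, t_E):
    """Point-source point-lens (Paczynski) magnification -- a stand-in for
    whatever light-curve code the 'microlensing friends' bring."""
    u2 = u_0 ** 2 + ((t - t_0) / t_E) ** 2
    return (u2 + 2.0) / math.sqrt(u2 * (u2 + 4.0))

def log_likelihood(params, times, fluxes, errs):
    """Gaussian log-likelihood: the only function a generic sampler needs."""
    t_0, u_0, t_E = params
    chi2 = sum(((f - magnification(t, t_0, u_0, t_E)) / e) ** 2
               for t, f, e in zip(times, fluxes, errs))
    return -0.5 * chi2
```

Any sampler that accepts a function mapping a parameter vector to a log-probability can then be plugged in without knowing anything about microlensing.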

Bennett+2008 (2008ApJ...684..663B), Poleski+2018 (2018arXiv180500049P)

Etienne Bachelet and friends

- Review of alternative algorithms for finding solutions: genetic algorithms, template matching
- Exploring new ways to find solutions efficiently and reliably: machine learning, nested sampling, Hamiltonian MCMC, ...

While the standard grid method is extremely CPU-intensive and already shows some weaknesses in finding the global minimum, alternatives exist that appear competitive. The goal of this project is to provide light curves to be examined and fit with alternative algorithms. A tutorial notebook will be provided to easily generate magnification functions and related quantities. Then, anyone will be able to investigate their own approach.
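As one concrete example of such an alternative, here is a bare-bones differential evolution (a genetic-style global optimizer) run on a deliberately multimodal toy function standing in for a binary-lens chi^2 surface. This is a sketch, not one of the pyLIMA/MulensModel/muLAn implementations:

```python
import math
import random

def differential_evolution(f, bounds, pop_size=30, n_gen=300,
                           F=0.7, CR=0.9, seed=1):
    """Bare-bones DE/rand/1/bin minimizer -- an illustration, not production code."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct members (none equal to i) for the mutation.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            c_trial = f(trial)
            if c_trial <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

def rastrigin(x):
    """Multimodal toy surface standing in for a binary-lens chi^2 landscape."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

best_x, best_f = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

The appeal over a brute-force grid is that the population concentrates evaluations near promising minima instead of spending them uniformly.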

- Public code(s) to generate binary-lens microlensing light curves (pyLIMA, MulensModel, muLAn).
- Simulated sample of WFIRST-like lightcurves

The pyLIMA team will provide the requirements.

Triple lenses are an unsolved problem in microlensing of considerable scientific interest. These are systems consisting of three bodies, which may correspond to stars with more than one planet or to planets in binary star systems. In the latter case, microlensing has already found two such systems with very different properties from the circumbinary planets found by Kepler. At the same time, these investigations have revealed many complexities in modeling triple lens events, such as degeneracies between a two-planets-one-star scenario and a one-planet-two-stars scenario. Furthermore, the published work has necessarily been limited to events that have been solved. There remain several microlensing events that may be due to triple lenses but for which solutions have not been found.

Writing the lens equation is simple:

XXX
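The placeholder presumably refers to the standard complex-notation form of the point-mass lens equation. For a triple lens, with all positions in units of the Einstein radius and mass fractions summing to 1, it reads:

```latex
\zeta = z - \sum_{i=1}^{3} \frac{\epsilon_i}{\bar{z} - \bar{z}_i},
\qquad \sum_{i=1}^{3} \epsilon_i = 1
```

Here $\zeta$ is the source position, $z$ an image position, $z_i$ the lens positions, and bars denote complex conjugation; clearing the conjugates yields the 10th-order polynomial discussed below.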

Solving this equation for the magnification of the source star (particularly a source of finite extent) as a function of time is computationally expensive. The equation can be rewritten as a 10th-order complex polynomial that is solved numerically. Typically, this is done using XXX, but it is unstable. Alternatively, one can use a Monte Carlo method to map light rays from the lens plane to the source plane and count the fraction that fall on the face of the source. This is extremely computationally expensive, and if the grid on the lens plane is too sparse, it is easy to miss small images, which leads to inaccuracies in the magnification calculation.

In addition, a triple lens model is defined by at least 9 parameters, which define a highly correlated likelihood space with multiple minima. This makes an exhaustive search of parameter space computationally intractable. So far, known triple lens solutions have generally been limited to those that are closely approximated by the linear sum of two two-body solutions. However, many more complex morphologies are known.

Thus, there are two distinct problems to work on.

Discuss numerical methods to robustly solve the three-body lens equation.

The main issue here is the solution of the 10th-order polynomial equation that yields the solutions of the lens equation. I often find that this requires calculations in quadruple precision, which can slow down the code significantly. More efficient polynomial solvers or coordinate transformations that avoid the need for quadruple precision could really help.
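As an illustration of a root finder that could be benchmarked against standard solvers, here is a sketch of Durand-Kerner simultaneous iteration for an arbitrary-degree complex polynomial. It runs in double precision only, so it does not by itself address the quadruple-precision issue:

```python
def durand_kerner(coeffs, n_iter=200, tol=1e-12):
    """Find all complex roots of p(z) = coeffs[0]*z^n + ... + coeffs[n]
    by Durand-Kerner simultaneous iteration (a sketch, double precision)."""
    n = len(coeffs) - 1
    lead = coeffs[0]

    def p(z):
        # Horner evaluation of the polynomial.
        acc = 0j
        for c in coeffs:
            acc = acc * z + c
        return acc

    # Standard starting guesses: powers of a point off the real axis,
    # so the guesses are distinct and break any symmetry of the roots.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(n_iter):
        new = []
        for i, r in enumerate(roots):
            denom = lead
            for j, s in enumerate(roots):
                if j != i:
                    denom *= (r - s)
            new.append(r - p(r) / denom)
        converged = max(abs(a - b) for a, b in zip(new, roots)) < tol
        roots = new
        if converged:
            break
    return roots

roots = durand_kerner([1, 0, 0, -1])  # z^3 - 1: the three cube roots of unity
```

For the lens polynomial one would pass its 11 coefficients; the method finds all roots simultaneously, which matters here because every image (every root) contributes to the magnification.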

Discuss methods for efficiently and sufficiently searching parameter space for solutions to light curves created by 3 bodies (minimum of 9 correlated parameters).

- ob06109: 1 star, 2 planet lenses, modeling easy
- ob08270: satisfactory solution not yet found. Likely 1 star, 1 planet, and 1 additional planet or star
- ob07349: circumbinary planet
- ob08092: 2 stars, 1 planet, wide separations, modeling trivial
- ob120026: 1 star, 2 planet lenses
- ob130341: 2 stars, 1 planet - some ambiguity about whether the planet orbits 1 or 2 stars
- ob141722: 1 star, 2 planets - modeling trivial
- ob160613: multiple solutions, not all in the published paper; some are binary source, some binary lens. MOA data cover the 1st caustic crossing but are not in the paper.

Some of these have data available on the Exoplanet Archive.

2 of the more difficult events include non-public data. We could try to make these data public by the time of the Hack Session, or at least release them to participants. These are:

- ob08270: proprietary data from OGLE, uFUN, and PLANET
- ob160613: proprietary data from MOA (other data is published).

- (http://adsabs.harvard.edu/abs/2015ApJ...806...63D)
- (http://adsabs.harvard.edu/abs/2015ApJ...806...99D)
- papers corresponding to the planets above

- Obtaining data for these events
- Some are on the Exoplanet Archive
- Some are published but not (yet) on the Archive
- Some are proprietary -> need a publication agreement

- Where is the snow line?
- What are the theoretical explanations for the planets we see?
- What are the theoretical expectations for what we should see?
- Projecting known planet populations into WFIRST detection space.

- How do microlensing planet populations compare with those inferred (or to be inferred) with TESS or Gaia?

Radek Poleski

Write a module for *MulensModel* to calculate the magnification
using the ray-shooting method.

One difficulty in accurately calculating the magnification of a finite source is to locate all of the images, especially if one or more of them is small. Using the ray-shooting method, we know that if one light ray falls on one side of the source and another one falls on the other side, there must be a light ray in between that falls directly on the source. A method that incorporates this information could result in gains in efficiency and accuracy over other methods by identifying all of the images and only shooting rays densely in the regions of those images.
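The "ray in between" idea can be sketched as a bisection along the segment between an inside ray and an outside ray. Here it is for a single point lens (the simplest stand-in for the binary/triple mapping the module would implement; all parameter values are illustrative):

```python
import math

def shoot(x1, x2, u_src=(0.3, 0.0)):
    """Map a ray from the image plane to the source plane for a point lens
    (units of the Einstein radius); return its offset from the source center."""
    r2 = x1 * x1 + x2 * x2
    y1, y2 = x1 - x1 / r2, x2 - x2 / r2
    return y1 - u_src[0], y2 - u_src[1]

def inside(x1, x2, rho=0.1):
    """Does this ray land on the source disk (radius rho)?"""
    d1, d2 = shoot(x1, x2)
    return d1 * d1 + d2 * d2 <= rho * rho

def refine_boundary(p_in, p_out, n_steps=60):
    """Given a ray that lands inside the source and a neighbor that lands
    outside, bisect the segment between them to locate the image boundary."""
    for _ in range(n_steps):
        mid = ((p_in[0] + p_out[0]) / 2, (p_in[1] + p_out[1]) / 2)
        if inside(*mid):
            p_in = mid
        else:
            p_out = mid
    return p_in

# The major image of a source at u = 0.3 sits near x1 = 1.16 on the axis;
# (2, 0) maps well outside the source, so the boundary lies between them.
edge = refine_boundary((1.161, 0.0), (2.0, 0.0))
```

Coarse rays flag where the boundary is; refinement like this then pins it down, so dense shooting is only needed inside the located images.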

- Write use cases.
- Write unit tests.
- Write module.

- All of the issues listed for the Magnification Map Project.
- The description needs to be written and revised by Radek.
- We can’t do both this and the Magnification Map because both require Radek.

Radek Poleski and Jennifer Yee

Write a module to calculate the magnification of a binary lens using the map-making method.

Map-making is one method for numerically calculating the magnification of a finite source. This is essentially a Monte Carlo method in which ~10^7 light rays are “shot” from the image (lens = planet host star) plane and collected and counted on the source plane. The magnification of the source is then proportional to the number of rays contained within the boundary of the source. Typical routines use a hexagonal grid on the source plane with many resolution elements relative to the size of the source.
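A toy version of this counting procedure for a single point lens (a square grid rather than the hexagonal grids typical routines use, and far fewer than 10^7 rays; all parameter values are illustrative):

```python
import math

def ray_shoot_magnification(u, rho, half_width=2.0, step=0.01):
    """Inverse ray shooting for a single point lens (Einstein-radius units):
    shoot a uniform grid of rays from the image plane, map each through the
    lens equation y = x - x/|x|^2, and count those landing on the source disk.
    Magnification = (image-plane area of hitting rays) / (source area)."""
    hits = 0
    n = round(2 * half_width / step)
    for i in range(n):
        x1 = -half_width + (i + 0.5) * step
        for j in range(n):
            x2 = -half_width + (j + 0.5) * step
            r2 = x1 * x1 + x2 * x2
            y1 = x1 - x1 / r2
            y2 = x2 - x2 / r2
            if (y1 - u) ** 2 + y2 ** 2 <= rho * rho:
                hits += 1
    return hits * step * step / (math.pi * rho * rho)

A = ray_shoot_magnification(u=0.3, rho=0.2)
```

For these values the result lands close to (slightly above) the point-source magnification A(0.3) of about 3.4, the excess coming from finite-source effects. A binary or triple lens only changes the mapping from (x1, x2) to (y1, y2); the counting is identical, which is why the method generalizes so easily.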

- Define use cases
- Write a use case for finding best-fit parameters for a high-magnification event.
- Discuss various features of map-making code, e.g.
- ability to plot the resulting magnification map
- fixed s, q
- usually used for high-magnification events because the relevant areas in both the source and image planes are well known.

- Outline the parameters, methods, and interface with *MulensModel*
- Consider avenues for future development (which might affect the architecture), e.g.
  - expanding the map if the original definition was too small

- Write unit tests
- Write methods

- Need to identify a test data set/set up some comparisons to existing magnification calculations.
- What language should it be written in? Maybe prototype in Python and then re-write in C? (We would need a C programmer)
- Need to prepare a brief presentation explaining the physics.
- Do we start with the high-mag case?
- Do we write some kind of checker for numerical accuracy?
- How much of the use cases needs to be done *before* the Hack Session?

Create a *Jupyter* notebook using *MulensModel* that robustly finds
solutions to binary microlensing events.

- Maybe one for identifying point lenses that need more parameters?

TBD

Create a Galaxy Zoo project to classify UKIRT light curves

The Galaxy Zoo platform uses “citizen scientists” to classify objects in large datasets. We could use this platform to create a project to classify UKIRT light curves as microlensing/not microlensing, regular/irregular variable, flat, and anomalous/point lens microlensing. Having a large set of human-classified objects would give us a check on the completeness and reliability of the algorithms used to identify microlensing events. For example, the algorithm used for UKIRT events is designed for finding point-source–point-lens events. Although it does find binary lenses, we don’t know how complete it is for such objects. We could also consider injecting fake microlensing events into the UKIRT dataset to better understand our completeness.