
Commit 8288e9a

committed 2025
1 parent 5070b77 commit 8288e9a

36 files changed: +1833 −0 lines changed

Diff for: _gsocproposals/2025/README.md

+17
@@ -0,0 +1,17 @@
## To add a new proposal

* Create a file `proposal_YOURPROJECTyourproposal.md` (look at this example: [`proposal_ROOTspark.md`](https://raw.githubusercontent.com/HSF/hsf.github.io/master/_gsocproposals/2018/proposal_ROOTspark.md)). The following sections are strongly suggested:

* Under `## Description`, write the description of the proposal in the context of your project. Try not to make it exhaustive, but rather readable and appealing to potential students. Note that the project duration has to be tuned for 175-hour projects (rather than 350 hours, as in previous years).

* Under `## Task ideas`, enumerate the main ideas for the tasks to be completed for the project to succeed. Keep in mind that the target is a summer student who is not familiar with your project and who will be working for 175 hours. Try to make these tasks realistic and concrete, targeting your project's main objectives: your future student will write a proposal with a plan of work built upon these.

* Under `## Expected results`, enumerate the main objectives you want achieved at the end of the summer project to consider the student's work successful. It is important to have realistic and concrete targets rather than generic, non-measurable objectives.

* Under `## Evaluation Tasks`, give pointers to the information potential candidates need to complete the tests required before being allowed to submit a proposal for your project. Do not write a direct link to the tests here; give it only to students who have expressed interest in your project. It is acceptable to state only the type of test in this section (e.g. a C++ algorithm), but you can be more specific if you need to.

* Under `## Requirements`, add all the mandatory skills you expect for completing the project.

* Under `## Mentors`, add the main mentor and at least one co-mentor. A co-mentor must be able to replace the main mentor in case of absence and to fill in the reports required during the program. Add only the names and e-mail addresses here, and mark the main mentor in bold. The main mentor is responsible for filling out the evaluation forms and for exchanging with the admins.

* Give useful links that allow candidates to understand your project better in case they are interested.
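Putting these conventions together, a minimal proposal skeleton might look like this (the title, project name, and mentor entries below are placeholders, not a real proposal):

```markdown
---
title: Your Project Title
layout: gsoc_proposal
project: YOURPROJECT
year: 2025
---

## Description
A short, readable summary of the project and its context.

## Task ideas
* First concrete task
* Second concrete task

## Expected results
* A realistic, measurable objective

## Evaluation Tasks
The type of test candidates should expect (e.g. a C++ algorithm).

## Requirements
Mandatory skills for the project.

## Mentors
* [**Main Mentor**](mailto:[email protected])
* [Co-Mentor](mailto:[email protected])
```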

Diff for: _gsocproposals/2025/proposal_CMS1.md

+52
@@ -0,0 +1,52 @@
---
title: Deep Learning Inference for Mass Regression
layout: gsoc_proposal
project: CMS
year: 2025
organization:
- Alabama
- New York University
- Vishwakarma Institute of Technology
---

## Description

One of the important aspects of searches for new physics at the [Large Hadron Collider (LHC)](https://home.cern/science/accelerators/large-hadron-collider) is the identification and reconstruction of single particles, jets and event topologies of interest in collision events. The End-to-End Deep Learning (E2E) project in the CMS experiment focuses on the development of these reconstruction and identification tasks with innovative deep learning approaches.

One of the main objectives of the CMS experiment's research and development towards the high-luminosity LHC is to incorporate cutting-edge machine learning algorithms for particle reconstruction and identification into the CMS software framework (CMSSW) data processing pipeline. This project will focus on the integration of the E2E framework with the [CMSSW](https://github.com/cms-sw/cmssw) inference engine for use in reconstruction algorithms in the offline and high-level trigger systems of the [CMS](https://home.cern/science/experiments/cms) experiment.
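As an illustration of what an inference engine must ultimately compute for a regression model, here is a minimal, dependency-free sketch of a feed-forward network's forward pass in plain Python. The layer sizes and weights are invented for illustration; the actual E2E models are PyTorch networks integrated into CMSSW, not this toy:

```python
def relu(vec):
    # element-wise rectified linear unit
    return [max(0.0, x) for x in vec]

def dense(x, weights, biases):
    # one fully connected layer: y = W x + b
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def predict_mass(features, layers):
    # layers: list of (weights, biases) pairs; ReLU between hidden
    # layers, a linear single-output head for the regressed mass
    h = features
    for weights, biases in layers[:-1]:
        h = relu(dense(h, weights, biases))
    w_out, b_out = layers[-1]
    return dense(h, w_out, b_out)[0]

# toy network: 2 inputs -> 2 hidden units -> 1 output
toy_layers = [
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),  # hidden layer (identity)
    ([[0.5, 0.5]], [0.0]),                   # output head (average)
]
print(predict_mass([4.0, 6.0], toy_layers))  # 5.0
```

In the real project the equivalent of `predict_mass` is executed by the CMSSW inference machinery under the latency and memory constraints of the reconstruction pipeline.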
## Duration

Total project length: 175/350 hours.

## Difficulty level

Intermediate

## Task ideas

* Development of end-to-end deep learning regression for particle property measurements
* Test and integration into CMSSW

## Expected results

* Extension of currently integrated E2E CMSSW prototype to include the regression model inference

## Requirements

C++, Python, PyTorch and some previous experience in Machine Learning.

<!-- ## Test
Please use [this link](https://docs.google.com/document/d/1QuG0Ho3pWsJGMx0fG969aBNfgPg-cDxU9w33ZuDEBng/edit?usp=sharing) to access the test for this project. -->

## Mentors

* [Ruchi Chudasama](mailto:[email protected]) (University of Alabama)
* [Shravan Chaudhari](mailto:[email protected]) (New York University)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)
* [Purva Chaudhari](mailto:[email protected]) (Vishwakarma Institute of Technology)

Please **DO NOT** contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and **include your CV** and **test results**. The mentors will then get in touch with you.

## Links

* [Paper 1](https://arxiv.org/abs/2309.14254)

Diff for: _gsocproposals/2025/proposal_CMS2.md

+47
@@ -0,0 +1,47 @@
---
title: Graph Neural Networks for Particle Momentum Estimation in the CMS Trigger System
layout: gsoc_proposal
project: CMS
year: 2025
organization:
- Alabama
- AUB
- Florida
---

## Description

The CMS experiment currently uses machine learning algorithms at the Level-1 (hardware) trigger to estimate the momentum of traversing particles such as muons. The first algorithm implemented in the trigger system was a discretized boosted decision tree. Currently, CMS is studying the use of deep learning algorithms at the trigger level, which require microsecond-level latency and therefore highly optimized inference.

This project will focus on the implementation and benchmarking of deep learning algorithms for the trigger inference task.
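To make the GNN idea concrete, here is a minimal, dependency-free sketch of one message-passing step over a graph of detector hits. The mean-of-neighbours aggregation and the single-feature toy graph are illustrative assumptions; a real model would use learned message and update functions in a framework such as PyTorch:

```python
def message_passing_step(features, edges):
    # One round of neighbourhood aggregation on an undirected graph:
    # every node adds the mean of its neighbours' features to its own.
    n, dim = len(features), len(features[0])
    agg = [[0.0] * dim for _ in range(n)]
    deg = [0] * n
    for u, v in edges:
        for k in range(dim):
            agg[u][k] += features[v][k]
            agg[v][k] += features[u][k]
        deg[u] += 1
        deg[v] += 1
    return [[f + a / max(d, 1) for f, a in zip(features[i], agg[i])]
            for i, d in enumerate(deg)]

# two connected hits exchange their single-feature values
print(message_passing_step([[1.0], [3.0]], [(0, 1)]))  # [[4.0], [4.0]]
```

Stacking several such steps lets information propagate along the track, after which a readout layer can regress the muon momentum; the benchmarking task is then to measure how cheaply this can be evaluated within the trigger latency budget.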
## Duration

Total project length: 175/350 hours.

## Task ideas

* Development and benchmarking of graph networks (GNN) for momentum regression in the trigger system

## Expected results

* Benchmarks of deep network model inference for muon momentum assignment for prompt and displaced particles

<!-- ## Test
Please use [this link](https://docs.google.com/document/d/1QuG0Ho3pWsJGMx0fG969aBNfgPg-cDxU9w33ZuDEBng/edit?usp=sharing) to access the test for this project. -->

## Requirements

Python, C++, and some previous experience in Machine Learning.

## Mentors

* [Suzanne Rozenzweig](mailto:[email protected]) (University of Florida)
* [Efe Yigibasi](mailto:[email protected]) (University of Florida)
* [Darin Acosta](mailto:[email protected]) (University of Florida)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)
* [Ali Hariri](mailto:[email protected]) (American University Beirut)

Please **DO NOT** contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and **include your CV** and **test results**. The mentors will then get in touch with you.

## Links

* [Paper](https://iopscience.iop.org/article/10.1088/1742-6596/1085/4/042042)

Diff for: _gsocproposals/2025/proposal_CMS3.md

+46
@@ -0,0 +1,46 @@
---
title: Event Classification With Masked Transformer Autoencoders
layout: gsoc_proposal
project: CMS
year: 2025
organization:
- Alabama
- NISER
---

## Description

One of the key tasks in particle physics analyses is the proper classification of particle collision events based on the parent particles and the process that produced them. To handle this task, we are developing a flexible machine learning pipeline that can be applied to a broad range of classification tasks. We will leverage a mix of older and newer techniques for transformer models, such as masking, pretraining with autoencoder architectures, and cross attention of task-specific attention heads.
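The cross-attention mechanism mentioned above can be sketched in a few lines of dependency-free Python: each task-specific query attends over the shared encoder outputs and returns a weighted average of the values. The dimensions and inputs are illustrative; a real model would use learned query/key/value projections and multiple heads:

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    # queries: task-head tokens; keys/values: shared encoder outputs.
    # Each query attends over all encoder positions.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# a query that scores both keys equally simply averages the values
print(cross_attention([[0.0, 0.0]],
                      [[1.0, 0.0], [0.0, 1.0]],
                      [[2.0, 0.0], [0.0, 2.0]]))  # [[1.0, 1.0]]
```

In the planned pipeline, each classification task would own its query tokens while the masked-autoencoder backbone supplies the keys and values, so tasks share one pretrained representation.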
## Duration

Total project length: 175/350 hours.

## Task ideas

* Develop a scalable transformer encoder model with task-specific attention heads combined using a cross attention mechanism
* Improve the existing code pipeline with features like multi-GPU parallelism and flexible preprocessing and analysis options
* Deploy the developed models and pipeline on simulated physics data and analyze performance gains and changes in model understanding from the techniques used

<!-- ## Test
Please use [this link](https://docs.google.com/document/d/1QuG0Ho3pWsJGMx0fG969aBNfgPg-cDxU9w33ZuDEBng/edit?usp=sharing) to access the test for this project. -->

## Requirements

Significant experience in Python and Machine Learning in PyTorch. Preferably some experience with Transformers and multi-GPU parallelization, or with the ROOT library developed by CERN.

## Difficulty Level

Advanced

## Mentors

* [Eric Reinhardt](mailto:[email protected]) (University of Alabama)
* [Diptarko Choudhury](mailto:[email protected]) (NISER)
* [Ruchi Chudasama](mailto:[email protected]) (University of Alabama)
* [Emanuele Usai](mailto:[email protected]) (University of Alabama)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)

Please **DO NOT** contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and **include your CV** and **test results**. The mentors will then get in touch with you.

## Links

* [Blog Post 1](https://medium.com/@eric0reinhardt/gsoc-2023-with-ml4sci-reconstruction-and-classification-of-particle-collisions-with-masked-bab8b38958df)
* [Paper 1](https://arxiv.org/abs/2401.00452)

Diff for: _gsocproposals/2025/proposal_DEEPLENSE1.md

+57
@@ -0,0 +1,57 @@
---
title: Superresolution for Strong Gravitational Lensing
layout: gsoc_proposal
project: DEEPLENSE
project size: 175hr/350hr
year: 2025
organization:
- Alabama
- Brown
- Paris
- RWTH
---

## Description

Strong gravitational lensing is a promising probe of the substructure of dark matter to better understand its underlying nature. Deep learning methods have the potential to accurately identify images containing substructure and to differentiate WIMP particle dark matter from other well-motivated models, including axions, axion-like particles, and warm dark matter.

Gravitational lensing data is often collected at low resolution due to the limitations of the instruments or observing conditions. Image super-resolution techniques can be used to enhance the resolution of these images with machine learning, allowing for more precise measurements of the lensing effects and a better understanding of the distribution of matter in the lensing system. This can improve our understanding of the mass distribution of the lensing galaxy and its environment, as well as of the properties of the background source being lensed.

This project will focus on the development of deep learning-based image super-resolution techniques, such as conditional diffusion models, to enhance the resolution of gravitational lensing data. Furthermore, we will also investigate leveraging the super-resolution models for other strong lensing tasks such as regression and lens finding.
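As a point of reference for what a learned super-resolution model improves upon, here is a dependency-free sketch of the simplest classical baseline, nearest-neighbour upsampling, which enlarges an image without adding any new information. The 2x factor and toy image are illustrative; a conditional diffusion model would instead learn to predict the missing high-frequency detail:

```python
def upsample_nearest(img, factor=2):
    # img: 2-D list of pixel values; each pixel is replicated
    # factor x factor times, so resolution grows but detail does not
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

low_res = [[1, 2],
           [3, 4]]
print(upsample_nearest(low_res))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The gap between such a baseline and the true high-resolution image is exactly what the learned models in this project are evaluated on.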
## Duration

Total project length: 175/350 hours.

## Difficulty level

Intermediate/Advanced

## Task ideas

* Expand the DeepLense functionality with superresolution algorithms suitable for computer vision tasks applicable to strong gravitational lensing data.

## Expected results

* Develop a superresolution model for DeepLense training and inference.

## Requirements

Python, PyTorch and relevant past experience in Machine Learning.

<!-- ## Test
Please use this [link](https://docs.google.com/document/d/1P8SC5bh7twrWta4MD8jpn5kwEmoIAYlDd39iVWRkkq8/edit?usp=sharing) to access the test for this project. -->

## Mentors

* [Michael Toomey](mailto:[email protected]) (Massachusetts Institute of Technology)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)
* [Pranath Reddy](mailto:[email protected]) (University of Florida)
* [Anna Parul](mailto:[email protected]) (University of Alabama)
* [Saranga Mahanta](mailto:[email protected]) (Institut Polytechnique de Paris)
* [Kartik Sachdev](mailto:[email protected]) (RWTH Aachen)

Please DO NOT contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and include **your CV** and **test results**. The relevant mentors will then get in touch with you.

## Links

* [Paper 1](https://arxiv.org/abs/2008.12731)
* [Paper 2](https://arxiv.org/abs/1909.07346)
* [Paper 3](https://arxiv.org/abs/2112.12121)

Diff for: _gsocproposals/2025/proposal_DEEPLENSE2.md

+53
@@ -0,0 +1,53 @@
---
title: Expanding Strong Gravitational Lensing Simulations
layout: gsoc_proposal
project: DEEPLENSE
project size: 175hr
year: 2025
organization:
- Alabama
- Brown
- Washington
- Paris
- RWTH
---

## Description

Strong gravitational lensing is a promising probe of the substructure of dark matter to better understand its underlying nature. Deep learning methods have the potential to accurately identify images containing substructure and to differentiate [WIMP](https://en.wikipedia.org/wiki/Weakly_interacting_massive_particles) particle dark matter from other well-motivated models, including vortex substructure of dark matter condensates and superfluids.

This project will focus on further development of the simulations used in the DeepLense pipeline, which are built on the package __lenstronomy__. This will include increasing the fidelity of the simulations and expanding the range of dark matter models considered for simulation. The project will also include helping to improve and facilitate the creation of data for applications with the various ML approaches in this project.
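To give a flavour of the physics these simulations implement, here is a dependency-free sketch of the lens equation for the simplest lens profile, the singular isothermal sphere (SIS), whose deflection has constant magnitude equal to the Einstein radius and points radially. In the actual pipeline such profiles come from lenstronomy; the numbers below are purely illustrative:

```python
import math

def sis_deflection(x, y, theta_e):
    # SIS deflection angle: magnitude theta_e (the Einstein radius),
    # directed radially, independent of distance from the lens centre
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    return (theta_e * x / r, theta_e * y / r)

def source_position(x, y, theta_e):
    # lens equation: beta = theta - alpha(theta)
    ax, ay = sis_deflection(x, y, theta_e)
    return (x - ax, y - ay)

# an image observed at 2 Einstein radii maps back to a source at 1
print(source_position(2.0, 0.0, 1.0))  # (1.0, 0.0)
```

Expanding the simulations means replacing or augmenting `sis_deflection` with richer mass models (substructure, vortices) and layering on observational effects such as noise and the instrument point-spread function.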
## Duration

Total project length: 175/350 hours.

## Difficulty level

Intermediate/Advanced

## Task ideas

* Help to streamline the simulation of strong lensing data sets.
* Create simulated strong lensing data sets that are designed to meet the various needs of other ML projects.
* Modify simulations to include new dark matter physics, as well as increase simulation fidelity by accounting for other observational systematics.

## Expected results

Update and streamline the code used by the project and generate several simulated lensing data sets.

## Requirements

Python and some previous experience with physics or astronomy.

<!-- ## Test
Please use this [link](https://docs.google.com/document/d/1P8SC5bh7twrWta4MD8jpn5kwEmoIAYlDd39iVWRkkq8/edit?usp=sharing) to access the test for this project. -->

## Mentors

* [Michael Toomey](mailto:[email protected]) (Massachusetts Institute of Technology)
* [Stephon Alexander](mailto:[email protected]) (Brown University)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)
* [Brandon Ames](mailto:[email protected]) (University of Alabama)

Please DO NOT contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and include **your CV** and **test results**. The relevant mentors will then get in touch with you.

## Links

* [Paper 1](https://arxiv.org/abs/2008.12731)
* [Paper 2](https://arxiv.org/abs/1909.07346)
* [Paper 3](https://arxiv.org/abs/2112.12121)

Diff for: _gsocproposals/2025/proposal_DEEPLENSE3.md

+59
@@ -0,0 +1,59 @@
---
title: Search for Strong Gravitational Lenses
layout: gsoc_proposal
project: DEEPLENSE
project size: 175hr
year: 2025
organization:
- Alabama
- Brown
- BITS Pilani Hyderabad
- Paris
- RWTH
---

## Description

Strong gravitational lensing is a powerful tool for exploring various astrophysical questions, including probing the substructure in the dark matter haloes of lensing galaxies. However, one of the main limitations of such analyses is the relatively small number of known lens candidates and confirmed lens systems.

Recent works have shown the potential of CNNs for the task of lens finding: classification of images obtained from telescopes into lensed and non-lensed systems. Since the number of real lenses is insufficient for training a machine learning algorithm, training datasets heavily rely on simulations. However, it has been noticed that CNNs perform worse on lens images obtained with an instrument different from the one that the simulations were tailored to reproduce (for example, different surveys use different color filters and have different resolution).

The goal of this project is to investigate the prospects of using domain adaptation techniques to bridge the gap between the simulated data used for training and real images from different surveys (such as [HSC-SSP](https://hsc.mtk.nao.ac.jp/ssp/), [HST](https://science.nasa.gov/mission/hubble/), [DES](https://www.darkenergysurvey.org), [JWST](https://webb.nasa.gov), and future missions), and to explore which types of lenses have a higher risk of being lost during automated searches.
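One simple member of the domain-adaptation family is statistics alignment (related in spirit to CORAL): shift and rescale each source-domain feature so that its mean and spread match the target domain's. A real adaptation of a CNN lens finder would operate on learned feature maps, not raw numbers; the toy data below are illustrative only:

```python
from statistics import mean, pstdev

def align_to_target(source, target):
    # source, target: lists of equal-length feature vectors.
    # Rescale each source feature so that its mean and standard
    # deviation match those of the corresponding target feature.
    n_dims = len(source[0])
    stats = []
    for d in range(n_dims):
        s_col = [row[d] for row in source]
        t_col = [row[d] for row in target]
        s_sd = pstdev(s_col) or 1.0  # guard against zero spread
        stats.append((mean(s_col), s_sd, mean(t_col), pstdev(t_col)))
    return [[(row[d] - s_mu) / s_sd * t_sd + t_mu
             for d, (s_mu, s_sd, t_mu, t_sd) in enumerate(stats)]
            for row in source]

src = [[0.0], [2.0]]    # simulated features: mean 1, std 1
tgt = [[10.0], [14.0]]  # survey features: mean 12, std 2
print(align_to_target(src, tgt))  # [[10.0], [14.0]]
```

More powerful techniques (adversarial alignment, self-training) follow the same logic of making simulation-trained features indistinguishable from survey features before classification.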
## Duration

Total project length: 175/350 hours.

## Difficulty level

Intermediate

## Task ideas

* Compare the performance of supervised neural networks with various architectures (convolutional, residual, equivariant) on the simulated dataset.
* For the best model, explore the use of domain adaptation techniques in application to real data.

## Expected results

* Develop a model for lens finding and apply it to real observational data.

## Requirements

Python, PyTorch and relevant past experience in Machine Learning.

<!-- ## Test
Please use this [link](https://docs.google.com/document/d/1P8SC5bh7twrWta4MD8jpn5kwEmoIAYlDd39iVWRkkq8/edit?usp=sharing) to access the test for this project. -->

## Mentors

* [Michael Toomey](mailto:[email protected]) (Massachusetts Institute of Technology)
* [Anna Parul](mailto:[email protected]) (University of Alabama)
* [Sergei Gleyzer](mailto:[email protected]) (University of Alabama)
* [Pranath Reddy](mailto:[email protected]) (BITS Pilani Hyderabad)
* [Saranga Mahanta](mailto:[email protected]) (Institut Polytechnique de Paris)
* [Kartik Sachdev](mailto:[email protected]) (RWTH Aachen)

Please DO NOT contact mentors directly by email. Instead, please email [[email protected]](mailto:[email protected]) with Project Title and include **your CV** and **test results**. The relevant mentors will then get in touch with you.

## Links

* [Paper 1](https://arxiv.org/abs/2008.12731)
* [Paper 2](https://arxiv.org/abs/1909.07346)
* [Paper 3](https://arxiv.org/abs/2112.12121)
