September, 2025: Our team has developed a framework named Evaloop to fairly assess LLMs' robustness in programming tasks. Based on it, we are actively maintaining a leaderboard with results for more than 100 LLMs. Check it out at https://evalooop.github.io!
September, 2025: Our work titled "How Quantization Impacts Privacy Risk on LLMs for Code?" has been accepted to the Main Track of the 2nd ACM International Conference on AI-powered Software (AIware 2025). Congrats Nazmul and Hua!
July, 2025: Our work titled "Learning From the Best: What Makes Popular Hugging Face Models? A Registered Report" has been accepted to the Registered Report Track of the International Conference on Software Maintenance and Evolution (ICSME). Congrats Yinan!