As someone who’s on interview panels and editorial boards, I’ve never seen an example of what you describe (though I’m sure it does happen more frequently now), but I do see and read a lot of “Justify the use of AI. What does it bring to the table for this problem?”, with rejections if that question can’t be answered, and that’s reassuring given what science is about. Similarly, you see funding opportunities in atmospheric science with scope descriptions that explicitly exclude AI, because it isn’t useful for the problem the scheme addresses. (There’s no question, though, that there’s more funding for AI-related projects; this usually results in people just adding a “bit of AI” on top of the planned research to fit the funding call.) So the trend has an effect, but it doesn’t take over everything.