<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Sadbhawna</title>
<meta name="author" content="Sadbhawna">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
#parent{
width: 10%;
margin: 0 auto;
}
</style>
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/sadbhawna.png">
<meta name="google-site-verification" content="tPEfPlEVnVvt6TFG9wuwRVDjNiN_hKAo35FFL8qYm_k" />
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Sadbhawna</name>
</p>
<p style="text-align:justify;"> I have been working as an Assistant Professor in the CSE and AIDE Departments at <a href="https://www.mnit.ac.in/">MNIT Jaipur</a> since January 2024. Before that, I was an Institute Post-Doctoral Fellow at <a href="https://www.iitm.ac.in/">IIT Madras</a>. I received my Ph.D. from the CSE Department at <a href="https://iitjammu.ac.in/">IIT Jammu</a> in January 2023, advised by <a href="https://sites.google.com/view/vinitjakhetiya/home/">Dr. Vinit Jakhetiya</a>.
I primarily work in the areas of Image Processing, Computer Vision, and Machine Learning. My main focus is analyzing and enhancing the perceptual quality of super-resolved images/videos and 3D synthesized views.
</p>
<p style="text-align:justify;">
I completed my M.Tech. at <a href="http://sliet.ac.in/">SLIET Longowal</a> in 2018 and my B.Tech. at <a href="https://www.himtu.ac.in/">HPTU</a> in 2016.
</p>
<p style="text-align:justify; color:red"> <strong>I am seeking highly motivated Ph.D. scholars to join my research group. Please contact me if you are interested.</strong> </p>
<p style="text-align:center">
<a href="[email protected]">Email</a>  / 
<a href="data/cv_sady.pdf">CV</a>  / 
<a href="https://scholar.google.com/citations?hl=en&user=9GKhxRcAAAAJ">Google Scholar</a>   / 
<a href="https://github.com/sadbhawnathakur">Github</a>  / 
<a href="https://www.linkedin.com/in/sadbhawna-thakur-102623188/">LinkedIn</a>  / 
<a href="https://twitter.com/SadbhawnaThakur">Twitter</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/sadbhawna.jpg"><img style="width:100%;max-width:100%" alt="profile photo" src="images/sadbhawna.jpg" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>News</heading>
<p> 03/08/2024- I received the Young Women Scientist Award from IIT (BHU), with computational resources worth ₹5 Lakhs. </p>
<p> 19/02/2024- One paper has been accepted in IEEE Transactions on Multimedia. </p>
<p> 11/01/2024- I have joined MNIT Jaipur as an Assistant Professor. </p>
<p> 16/04/2023- I have joined IIT Madras as an Institute Post Doctoral Fellow. </p>
<p> 05/04/2023- One paper has been accepted in CVPR Workshops 2023 (NTIRE: New Trends in Image Restoration and Enhancement). </p>
<p> 07/01/2023- I successfully defended my Ph.D. Thesis. </p>
<p> 04/09/2022- One paper has been accepted in IEEE Transactions on Multimedia. </p>
<p> 15/03/2022- Started working with <a href="https://spjaiswal.github.io/">Dr. Sunil Jaiswal</a>, Head of R&amp;D, <a href="https://www.k-lens.de/">K|Lens GmbH</a>. </p>
<p> 25/12/2021- Selected as a PIEF candidate by IGSTC for a six-month internship in Germany. <a href="https://www.igstc.org/images/announcements/164016139820211222.pdf">(Results Link)</a> </p>
<p> 30/11/2021- One paper has been accepted in AAAI Student Abstract Program. </p>
<p> 04/10/2021- Two papers have been accepted in IEEE Transactions on Image Processing. </p>
<p> 15/06/2021- Our team was awarded USD 600, sponsored by Google, for finishing as runners-up in the ICASSP 2021 SPGC Grand Challenge. </p>
<br>
<heading>Teaching</heading>
<p> Spring 2024- CST310 Computer Graphics <a href="https://sadbhawnathakur.github.io/computer_graphics.html">Course Website</a> </p>
<p> Spring 2024- CS Computer Organization and Architecture <a href="https://sadbhawnathakur.github.io/computer_organization_and__architecture.html">Course Website</a> </p>
<p> Fall 2024- CST437 Neural Networks <a href="https://sadbhawnathakur.github.io/neural_networks.html">Course Website</a> </p>
<p> Fall 2024- 22CST101 Programming with Python <a href="https://sadbhawnathakur.github.io/python_programming.html">Course Website</a> </p>
<br> <br>
<heading>Publications</heading>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/cvpr.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2305.02660">
<papertitle>Expanding Synthetic Real-World Degradations for Blind Video Super Resolution</papertitle>
</a>
<br>
<a>Mehran Jeelani*</a>,
<strong>Sadbhawna Thakur*</strong>,
<a href="https://people.mpi-inf.mpg.de/~ncheema/">Noshaba Cheema</a>,
<a href="https://www.researchgate.net/profile/Klaus-Illgner">Klaus Illgner</a>,
<a href="https://graphics.cg.uni-saarland.de/people/slusallek.html">Philipp Slusallek</a>,
<a href="https://spjaiswal.github.io/">Sunil Jaiswal</a>
<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023</em>
<br>
<a href="https://arxiv.org/abs/2305.02660">Paper</a>
<p style="text-align:justify;"> This work shows how varied random degradations can contribute to learning an effective VSR model, especially for
real-world video artifacts.</p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/tmm.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/abstract/document/9891833">
<papertitle>Context Region Identification based Quality Assessment of 3D Synthesized Views</papertitle>
</a>
<br>
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://sites.google.com/view/badrisubudhi/home?authuser=0">Badri N. Subudhi</a>,
<a href="https://spjaiswal.github.io/">Sunil Jaiswal</a>,
<a href="https://web.xidian.edu.cn/ldli/en/index.html">Leida Li</a>,
<a href="https://personal.ntu.edu.sg/wslin/Home.html">Weisi Lin</a>
<br>
<em>IEEE Transactions on Multimedia, 2022</em>
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/9891833">Paper</a>
<p style="text-align:justify;"> In this work, we propose a new and efficient quality assessment algorithm based upon the variation in the depth of 3D synthesized and reference views.</p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/aaai.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/21656">
<papertitle>Do We Need a New Large-Scale Quality Assessment Database for Generative Inpainting based 3D View Synthesis? (Student Abstract)</papertitle>
</a>
<br>
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://sites.google.com/view/badrisubudhi/home?authuser=0">Badri N. Subudhi</a>,
<a>Harshit Shakya</a>,
<a href="https://scholar.google.com/citations?user=oQ28WW0AAAAJ&hl=en">Deebha Mumtaz</a><br>
<br>
<em>AAAI 2022</em>
<br>
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/21656">Paper</a>
<p style="text-align:justify;"> We created a test dataset to analyze the need for a new perceptual metric for 3D synthesized views.</p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/tip2.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/document/9714223">
<papertitle>Shift Compensation and Cosine Similarity based Quality Assessment of 3D-Synthesized Images</papertitle>
</a>
<br>
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://github.com/shubhamchaudhary2015/ct_covid19_cap_cnn">Shubham Chaudhary</a>,
<a href="https://sites.google.com/view/badrisubudhi/home?authuser=0">Badri N. Subudhi</a>,
<a href="https://sharathg.cis.upenn.edu/">Sharath Chandra Guntuku</a>,
<a href="https://personal.ntu.edu.sg/wslin/Home.html">Weisi Lin</a><br>
<br>
<em>IEEE Transactions on Image Processing</em>, 2021
<br>
<a href="https://github.com/sadbhawnathakur/3D-Image-Quality-Assessment">Project Page</a>
/
<a href="https://ieeexplore.ieee.org/document/9714223">Paper</a>
<p style="text-align:justify;"> In this work, we extract perceptually important deep features from a pre-trained VGG-16 architecture applied to the Laplacian pyramid to predict the quality of 3D synthesized views.</p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/tip1.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/document/9697977">
<papertitle>Stretching Artifacts Identification for Quality Assessment of 3D-Synthesized Views</papertitle>
</a>
<br>
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://sharathg.cis.upenn.edu/">Sharath Chandra Guntuku</a>,
<a href="https://scholar.google.com/citations?user=oQ28WW0AAAAJ&hl=en">Deebha Mumtaz</a>,
<a href="https://sites.google.com/view/badrisubudhi/home?authuser=0">Badri N. Subudhi</a><br>
<br>
<em>IEEE Transactions on Image Processing</em>, 2021
<br>
<a href="https://github.com/sadbhawnathakur/3D-Image-Quality-Assessment">Project Page</a>
/
<a href="https://ieeexplore.ieee.org/document/9697977">Paper</a>
<p style="text-align:justify;"> We proposed a Convolutional Neural Network (CNN) based algorithm that identifies blocks with stretching artifacts and then uses the count of such blocks to predict the quality of 3D-synthesized views. </p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/covid.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/abstract/document/9414007">
<papertitle>Detecting COVID-19 and Community Acquired Pneumonia using Chest CT Scan Images with Deep Learning</papertitle>
</a>
<br>
<a href="https://github.com/shubhamchaudhary2015/ct_covid19_cap_cnn">Shubham Chaudhary</a>,
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://sites.google.com/view/badrisubudhi/home?authuser=0">Badri N. Subudhi</a>,<br>
<a href="https://scholar.google.co.in/citations?user=2Zdi2D4AAAAJ&hl=en">Ujjwal Baid</a>,
<a href="https://sharathg.cis.upenn.edu/">Sharath Chandra Guntuku</a>
<br>
<em>ICASSP</em>, 2021
<br>
<a href="https://github.com/shubhamchaudhary2015/ct_covid19_cap_cnn">Project Page</a>
/
<a href="https://ieeexplore.ieee.org/abstract/document/9414007">Paper</a>
<p style="text-align:justify;"> We proposed a two-stage Convolutional Neural Network (CNN) based classification framework for detecting COVID-19 and Community Acquired Pneumonia (CAP) using chest Computed Tomography (CT) scan images.</p>
</td>
</tr>
<tr onmouseout="hypernerf_stop()" onmouseover="hypernerf_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hypernerf_image'>
<img src='images/mmsp.png' width="160">
</div>
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/abstract/document/9287088">
<papertitle>Distortion Specific Contrast Based No-Reference Quality Assessment of DIBR-Synthesized Views</papertitle>
</a>
<br>
<strong>Sadbhawna Thakur</strong>,
<a href="https://sites.google.com/view/vinitjakhetiya/home/">Vinit Jakhetiya</a>,
<a href="https://scholar.google.com/citations?user=oQ28WW0AAAAJ&hl=en">Deebha Mumtaz</a>,
<a href="https://spjaiswal.github.io/">Sunil Jaiswal</a>
<br>
<em>MMSP</em>, 2020
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/9287088">Paper</a>
<p style="text-align:justify;"> We proposed a perceptual metric for 3D views based on the difference in properties of synthetic and natural images.</p>
</td>
</tr>
</tbody></table>
<div id="parent">
<script type="text/javascript" id="clstr_globe" src="//clustrmaps.com/globe.js?d=PAW_gOsDuJXHxCkTVgqM7ohN5BLMqFb2T6ai6IlA94k"></script>
</div>
<br>
<p style="text-align:center;font-size:small;">
Last updated on January 11, 2024 | Thanks to <a href="https://jonbarron.info/">Dr. Jonathan T. Barron</a> for this awesome template.
</p>
</td>
</tr>
</tbody></table>
</body>
</html>