<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="robots" content="noarchive">
<title>Fabrice Matulic homepage</title>
<link href="style.css" rel="stylesheet">
<link href="jquery/jquery-ui.css" rel="stylesheet">
<script src="jquery/jquery-2.1.1.min.js"></script>
<script src="jquery/jquery-ui.min.js"></script>
<script>
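// Initialise the jQuery UI tab widget on the #tabs container once the DOM is ready.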
$(function() {
$( "#tabs" ).tabs();
});
</script>
</head>
<body>
<div class="shadow" style="background-color: #FFFFFF; margin-top: 0;">
<h1 style="text-align:center; padding-top: 20px;">Homepage of Fabrice Matulic</h1>
<table style="margin:20px 30px 20px 0;">
<tr>
<td><img src="images/me.gif" class="imgBorders" style="margin:0 20px 0 20px;" /></td>
<td style="font-size:1.2em">
<p>I am a Senior Researcher at <a href="https://www.preferred-networks.jp/en/">Preferred Networks Inc.</a>, Japan, and Assistant Professor at the University of Waterloo, Canada, in Human-Computer Interaction (HCI) and applied AI. <!-- The overarching theme of most of my early work is the exploration of novel but practical tools, interactions and paradigms leveraging the potential of modern devices to facilitate document engineering workflows in the office and beyond. My interests are mostly in interactive surfaces and pen computing, but recently, I have been exploring the space of embodied and remote interaction with large and ubiquitous displays.-->My main research interests are in interactive surfaces and pen computing, embodied and remote interaction with large displays, and extended reality (XR). Within those areas, I try to harness the power of AI/deep learning to create novel interactive experiences and techniques.
<!--as well as human-robot interaction (HRI).-->
</p><p>
I have conducted research in both industry and academia in several countries, including Canada (University of Waterloo), Germany (Technische Universität Dresden), the US (Microsoft Research, Redmond), Switzerland (ETH Zurich) and Japan (Ricoh and Preferred Networks, Tokyo). Prior to that, I worked as a freelance developer for various startups in Germany during the first dot-com boom.</p>
</td>
</tr>
</table>
<!-- Tabs -->
<div id="tabs">
<ul>
<li><a href="#tabs-1"><strong>Projects</strong></a></li>
<li><a href="#tabs-2"><strong>Publications</strong></a></li>
<li><a href="#tabs-3"><strong>Contact</strong></a></li>
</ul>
<div id="tabs-1"><table class="onepix">
<col width="64" /> <tbody>
<tr>
<td rowspan="10" colspan="1" align="center"><img style="width: 120px; height: 27px;" alt="" src="images/PFN_logo_res.png" /><br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/MultiviewCapture.jpg" /><br/><img style="width: 232px;" alt="" src="images/DataAugment.jpg" /><br/>
</td>
<td class="descr">
<div class="ptitle">HCI for Machine Learning</div><p>Preparing and preprocessing source data to train neural networks can require considerable manual labour and expertise. We create intuitive user interfaces and techniques to facilitate some of those tasks, including data labelling and augmentation.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Wataru Kawabe, Taisuke Hashimoto, Fabrice Matulic, Takeo Igarashi, Keita Higuchi. Interactive Material Annotation on 3D Scanned Models leveraging Color-Material Correlation, <em>SIGGRAPH Asia 2023 TC</em>
(Paper: <a href="pubs/SIGGRAPHAsia2023TC.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=qwU-OiYsZ-g"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, Keita Higuchi. Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames, <em>ISS 2023</em>
(Paper: <a href="pubs/ISS2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=PT-PElgtQcI"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Keita Higuchi, Taiyo Mizuhashi, Fabrice Matulic, Takeo Igarashi. Interactive Generation of Image Variations for Copy-Paste Data Augmentation, <em>CHI 2023 LBW</em>
(Extended Abstract: <a href="pubs/CHI2023LBW2.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=HklROgx4iNA"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PhoneDexterity.png" /><br/>
</td>
<td class="descr">
<div class="ptitle">Dexterous Finger Gestures to Manipulate Mobile Phones</div><p>This research explores single-handed "dexterous gestures" for manipulating a mobile phone using fine motor skills of fingers. We consider four dexterous manipulations: shift, spin, rotate, and flip, which we analyse in three user studies. We provide design guidelines to map gestures to interactions and show how they can be used in applications.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Yen-Ting Yeh, Fabrice Matulic and Daniel Vogel. Phone Sleight of Hand: Finger-Based Dexterous Gestures for Physical Interaction with Mobile Phones, <em>CHI 2023</em>
(Paper: <a href="pubs/CHI2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=U0kusO7hH1g"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PenTouchMidair.png" /><br/><img style="width: 232px;" alt="" src="images/VRTerrainModelling.jpg" /><br/>
</td>
<td class="descr">
<div class="ptitle">Pen+Touch+Midair Hybrid Two-Hand Interaction in desktop VR</div><p>We explore a design space for hybrid bimanual pen and touch input extended to midair interaction in desktop-based virtual reality, specifically, asymmetric interaction patterns combining the pen with the other hand when interacting in the same “space” (either surface or midair), across both spaces, and with cross-space transitions (from surface to midair and vice versa). We concretely investigate those interactions and associated gestures with three testbed applications for 3D modelling, volumetric rendering, and terrain editing.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic and Daniel Vogel. Pen+Touch+Midair: Cross-Space Hybrid Bimanual Interaction on Horizontal Surfaces in Virtual Reality, <em>GI 2023</em>
(Paper: <a href="pubs/GI2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://youtu.be/iHgpMMFVtgg"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel. Terrain Modelling with a Pen & Touch Tablet and Mid-Air Gestures in Virtual Reality, <em>CHI 2022 LBW</em>
(Extended Abstract: <a href="pubs/CHI2022LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=X32PX08XXYQ"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/Typealike.png" /><br />
</td>
<td class="descr">
<div class="ptitle">Typealike: Near-Keyboard Hand Postures for Expanded Laptop Interaction</div><p>Typealike is a style of hand postures close to natural typing poses that allow users to quickly trigger commands on a laptop computer. The hand postures are detected using deep learning classification of images captured by the laptop's webcam reflected through a downward-facing mirror.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Nalin Chhibber, Hemant Bhaskar Surale, Fabrice Matulic and Daniel Vogel. Typealike: Near-Keyboard Hand Postures for Expanded Laptop Interaction, <em>ISS 2021</em>
(Paper: <a href="pubs/ISS2021a.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=QyfMicxQH84"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/Phonetroller.jpg" /><br/><br/>
<img style="width: 232px;" alt="" src="images/PhoneVR2.jpg" /><br/>
</td>
<td class="descr">
<div class="ptitle">Mobile Phones as VR Controllers with Above-Screen Mirrors to Capture and Track Hands for Visualisation in VR</div><p>Smartphones can be used as VR controllers but since the user cannot see the phone or their hands when wearing the headset, precise touch input is difficult. We address this problem by attaching one or two mirrors above the phone screen such that the front-facing camera captures the hand through reflection. With a single mirror the camera feed can be shown directly as a texture on the screen of the phone model in VR to help the user aim precisely with their fingers. With two mirrors capturing the hand from two different angles, we can track the 3D position of fingertips using deep learning.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Aditya Ganeshan, Hiroshi Fujiwara and Daniel Vogel. Phonetroller: Visual Representations of Fingers for Precise Touch Input with Mobile Phones in VR, <em>CHI 2021</em>
(Paper: <a href="pubs/CHI2021.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=fMeDbZRSVAE"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Taiga Kashima, Deniz Beker, Daichi Suzuo, Hiroshi Fujiwara and Daniel Vogel. Above-Screen Fingertip Tracking with a Phone in Virtual Reality, <em>CHI 2023 LBW</em>
(Extended Abstract: <a href="pubs/CHI2023LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=hNPKkb-ml6k"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Taiga Kashima, Deniz Beker, Daichi Suzuo, Hiroshi Fujiwara and Daniel Vogel. Above-Screen Fingertip Tracking and Hand Representation for Precise Touch Input with a Phone in Virtual Reality, <em>GI 2024</em>
(Paper: <a href="pubs/GI2024.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=puchckJFSCY"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PenSight.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">PenSight: Enhancing Pen Interaction via a Pen-Top Camera</div><p>PenSight is a novel concept to enhance pen interaction on tablets using a fisheye-lens camera attached to the top of the pen and facing downwards. Thus, the camera can "see" the user's hands and the surrounding environment. Using deep learning, we can detect different hand postures and tablet grips for quick action triggers and capturing off-tablet content such as surrounding documents.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Riku Arakawa, Brian Vogel and Daniel Vogel. PenSight: Enhanced Interaction with a Pen-Top Camera, <em>CHI 2020</em>
(Paper: <a href="pubs/CHI2020.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=x4cobX5RTc8"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel. Deep Learning-Based Hand Posture Recognition for Pen Interaction Enhancement, <em>Artificial Intelligence for Human Computer Interaction: A Modern Approach, Springer HCIS 2021</em>
(Book chapter: <a href="https://link.springer.com/chapter/10.1007/978-3-030-82681-9_7"><img alt="pdf" src="images/Springericon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PenEMGIcon.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Elicitation of Alternative Pen-Holding Postures for Quick Action Triggers with Suitability for EMG Armband Detection</div><p>In this project we study what alternative ways of gripping a digital pen people might choose to trigger actions and shortcuts in applications (e.g. while holding the pen, extend the pinkie to invoke a menu). We also investigate how well we can recognise these different pen-holding postures using data collected from an EMG armband and deep learning.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Brian Vogel, Naoki Kimura and Daniel Vogel. Eliciting Pen-Holding Postures for General Input with Suitability for EMG Armband Detection, <em>ISS 2019</em>
(Paper: <a href="pubs/ISS2019.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=yNXyQYevaBY"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel. Deep Learning-Based Hand Posture Recognition for Pen Interaction Enhancement, <em>Artificial Intelligence for Human Computer Interaction: A Modern Approach, Springer HCIS 2021</em>
(Book chapter: <a href="https://link.springer.com/chapter/10.1007/978-3-030-82681-9_7"><img alt="pdf" src="images/Springericon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/HayateGesture.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Human-Robot Interaction for Personal Robots</div><p>Smart domestic robots are poised to revolutionise the way household chores and everyday tasks are carried out in the home of the future. Thanks to the recent boom of deep learning and "artificial intelligence", machines are able to autonomously perform increasingly complex tasks. But no matter how smart these robots may be or become, humans still need to engage with them and it is paramount that such interactions occur smoothly and safely. Our research efforts in human-robot interaction aim to not only better support end users (customers) when operating robots in their home, but also facilitate the programming and training of these machines by engineers, technicians and developers.</p>
<div class="relpub">Related Projects and Publications</div>
<ul class="disc">
<li>
<p class="p">Naoya Yoshimura, Hironori Yoshida, Fabrice Matulic and Takeo Igarashi. Extending Discrete Verbal Commands with Continuous Speech for Flexible Robot Control, <em>CHI EA 2019</em>
(Extended Abstract: <a href="pubs/CHI2019LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://youtu.be/0dCXx-0sGUY"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Yuta Kikuchi and Jason Naradowsky. Enabling Customer-Driven Learning and Customisation Processes for ML-Based Domestic Robots, <em>HCML Perspectives Workshop at CHI 2019</em>
(Position paper: <a href="pubs/CHI2019HCMLWorkshop.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Autonomous Tidying-up Robot System, <em>CEATEC JAPAN 2018</em>
(<a target="_blank" href="https://projects.preferred.jp/tidying-up-robot/en/">Project page</a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/ColourAIze.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">ColourAIze: AI-Driven Colourisation of Paper Drawings with Interactive Projection System</div><p>ColourAIze is an interactive system that analyses black and white drawings on paper, automatically determines realistic colour fills using AI and projects those colours onto the paper within the line art. In addition to selecting between multiple colouring styles, the user can specify local colour preferences to the AI via simple stylus strokes in desired areas of the drawing. This allows users to immediately and directly view potential colour fills for paper sketches or published black and white artwork such as comics.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic. ColourAIze: AI-Driven Colourisation of Paper Drawings with Interactive Projection System, <em>ISS 2018</em>
(Paper: <a href="pubs/ISS2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=DYZzhK0ywiQ"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/UnimanualPT.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Single-Hand Pen and Touch Input Using Variations of Pen-Holding Grips</div><p>This work investigates the use of different pen-holding grips while writing and drawing on a tablet to trigger various actions, including changing the pen function (e.g. to select, scroll, search) and calling in-place menus. The postures are recognised when the hand contacts the surface using a deep convolutional neural network applied on the raw touch input data (the capactitive image of the tablet). The feasibility of this approach is confirmed by two user evaluations.</p>
<div class="relpub">Related Publications</div>
<ul class="disc">
<li>
<p class="p">Drini Cami, Fabrice Matulic, Richard G. Calland, Brian Vogel, Daniel Vogel, Unimanual Pen+Touch Input Using Variations of Precision Grip Postures, <em>UIST 2018</em>
(Paper: <a href="pubs/UIST2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://youtu.be/RqpRRSvNbAM"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel. Deep Learning-Based Hand Posture Recognition for Pen Interaction Enhancement, <em>Artificial Intelligence for Human Computer Interaction: A Modern Approach, Springer HCIS 2021</em>
(Book chapter: <a href="https://link.springer.com/chapter/10.1007/978-3-030-82681-9_7"><img alt="pdf" src="images/Springericon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td rowspan="4" colspan="1" align="center">
<img style="width: 120px; height: 22px;" alt="" src="images/wathci.png" /><br />
<img style="width: 120px; height: 29px;" alt="" src="images/UWaterloo.png" />
<br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/HybridPointing.png" /><br />
</td>
<td class="descr">
<div class="ptitle">HybridPointing for Touch: Switching Between Absolute and Relative Pointing on Large Touch Screens</div><p>CursorTap is a multitouch selection technique to efficiently reach both near and distant targets on large wall displays using hybrid absolute and relative pointing. The user switches to relative mode with three-fingers of one hand while using the other hand to control a cursor, similar to a touchpad.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Terence Dickson, Rina R. Wehbe, Fabrice Matulic, Daniel Vogel, HybridPointing for Touch: Switching Between Absolute and Relative Pointing on Large Touch Screens, <em>ISS 2021</em>
(Paper: <a href="pubs/ISS2021b.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=drMl3CTLrP0"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/ModeSwitchVR.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Barehand Mid-air Mode-Switching Techniques in VR</div><p>This work presents an empirical comparison of bare hand, mid-air mode-switching techniques suitable for virtual reality (VR). Specifically, we look at what kind of hand/finger postures can efficiently change the type of operation performed by the same action of the dominant hand (e.g. from moving a virtual object with a finger translation to scaling or copying it). We consider common finger and hand motions such as pinching fingers, turning and waving the hand(s)) as switching techniques. Our results provide guidance to researchers and practitioners when choosing or designing bare hand, mid-air mode-switching techniques in VR.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Hemant Bhaskar Surale, Fabrice Matulic, Daniel Vogel, Experimental Analysis of Barehand Mid-air Mode-Switching Techniques in Virtual Reality, <em>CHI 2019</em>
(Paper: <a href="pubs/CHI2019.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://www.youtube.com/watch?v=2TKuZsPFjoI"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/Multiray.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Multiray: Multi-Finger Raycasting for Large Vertical Displays</div><p>Multiray is a concept that extends single raycasting for interacting with distant vertical displays to multi-finger raycasting, that is, each finger projects a ray onto the remote display. In particular, with multirays, patterns of ray intersections created by hand postures can form 2D geometric shapes to trigger actions and perform direct manipulations that go beyond single-point selections.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Daniel Vogel, Multiray: Multi-Finger Raycasting for Large Displays, <em>CHI 2018</em>
(Paper: <a href="pubs/CHI2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/CHI2018.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/Modeswitch.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Experimental Analysis of Mode Switching Techniques in Touch-based User Interfaces</div><p>This project looks at the performance of switching between different functions or modes for touch input (the possibility to rapidly change the output produced by the same touch action). Six techniques are evaluated in sitting and standing conditions: long press, non-dominant hand, two-fingers, hard press, knuckle, and thumb-on-finger. Our work addresses the lack of empirical evidence on the efficiency of touch mode-switching techniques and provides guidance to practitioners and researchers when designing new mode-switching methods.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li><p class="p">Hemant Bhaskar Surale, Fabrice Matulic and Daniel Vogel, Experimental Analysis of Mode Switching Techniques in Touch-based User Interfaces, <em>CHI 2017</em>
(Paper: <a href="pubs/CHI2017.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/CHI2017.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td rowspan="4" colspan="1" align="center"><img style="width: 120px; height: 172px;" alt="" src="images/IMLDLogo.png" /><br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PalmFingersFanMenu.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Hand and Finger Posture-Based Calling and Control of Tabletop Widgets</div><p>Tabletop interaction can be enriched by considering whole hands as input instead of only fingertips. In this work, we propose a straightforward, easily reproducible computer vision algorithm to recognise hand contact shapes from the raw touch contact image. The technique is able to discard resting arms and supports dynamic properties such as finger movement and hover. The algorithm is used to trigger, parameterise, and dynamically control menu and tool widgets.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Daniel Vogel and Raimund Dachselt, Hand Contact Shape Recognition for Posture-Based Tabletop Widgets and Interaction, <em>ISS 2017</em>
(Paper: <a href="pubs/ISS2017.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/ISS2017.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/EmbeddedPres.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Embodied Interactions for Novel Immersive Presentational Experiences</div><p>This project is about enhancing live multimedia presentations by integrating presenters in their presentation content as interactive avatars. Using multimodal input, especially body gestures, presenters control those embedded avatars through which they can interact with the virtual presentation environment in a fine-grained fashion, i.e. they are able to manipulate individual presentation elements and data as virtual props. The goal of this endeavour is to create novel immersive presentational experiences for live stage performances (talks, lectures etc.) as well as for remote conferencing in more confined areas such as offices and meeting rooms.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Lars Engeln, Christoph Träger and Raimund Dachselt, Embodied Interactions for Novel Immersive Presentational Experiences, <em>CHI 2016 Late-Breaking Work</em>
(Extended Abstract: <a href="pubs/CHI2016LBW_EIP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/EmbeddedPres.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/SmartProj.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Smart Ubiquitous Projection: Discovering Adequate Surfaces for the Projection of Adaptive Content</div><p>In this work, we revisit the concept of ubuiquitous projection, where instead of considering every physical surface and object as a display, we seek to determine areas that are suitable for the projection and interaction with digital information. We achieve this using mobile projector-cameras units (procams) and a computer vision technique to automatically detect rectangular surface regions with properties that are desirable for projection (uniform, pale, non-reflective, planar etc.). In a next step, we explore body-based interactions to adaptively lay out content in those recognised areas.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Wolfgang Büschel, Michael Ying Yang, Stephan Ihrke, Anmol Ramraika, Carsten Rother and Raimund Dachselt, Smart Ubiquitous Projection: Discovering Surfaces for the Projection of Adaptive Content, <em>CHI 2016 Late-Breaking Work</em>
(Extended Abstract: <a href="pubs/CHI2016LBW_SP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/SmartProj.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/BodyLenses.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">BodyLenses – Embodied Magic Lenses and Personal Territories for Wall Displays</div>
<p>Magic lenses are popular tools to provide locally altered views of visual data. In this work, we introduce the concept of BodyLenses, special kinds of magic lenses for wall displays that are mainly controlled by body interactions. Using body position, arm gestures, distance to the display and classic multitouch on the screen, we show how parameters such as lens position, shape, function and tool selection can be dynamically and intuitively modified by users.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Ulrike Kister, Patrick Reipschläger, Fabrice Matulic and Raimund Dachselt, BodyLenses – Embodied Magic Lenses and Personal Territories for Wall Displays, <em>ITS 2015</em>
(Paper: <a href="pubs/ITS2015Bodylenses.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/ITS2015Bodylenses.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1" align="center"><img style="width: 120px; height: 33px;" alt="" src="images/MSRLogo.png" /><br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/MSRGripSensing.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Sensing Techniques for Tablet+Stylus Interaction</div>
<p>Using a special grip- and motion-sensitive stylus and a grip-sensitive tablet, we explore a range of novel pen and touch interactions including detecting how the user holds the pen and the tablet, distinguishing between the pen-holding hand and the bare hand, discarding touches caused by resting palms while writing (palm rejection) and a number of contextual gestures resulting from the detection of those different postures.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Ken Hinckley, Michel Pahud, Hrvoje Benko, Pourang Irani, Francois Guimbretiere, Marcel Gavriliu1, Xiang 'Anthony' Chen, Fabrice Matulic, Bill Buxton and Andy Wilson, Sensing techniques for tablet+stylus interaction, <em>UIST 2014</em>
(Paper: <a href="http://dl.acm.org/citation.cfm?id=2647379"><img alt="pdf" src="images/acmdl.gif" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="https://youtu.be/9dgHgHQSuuY"><img alt="video" src="images/youtubeicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td rowspan="6" colspan="1" align="center"><img style="width: 120px; height: 20px;" alt="" src="images/ETHLogo.png" /><br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/Eyes-free_whiteboard.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Handheld Devices as Eyes-Free Touch Toolboxes for Pen-Based Interactive Whiteboards</div>
<p>In this project, we investigate how smartphones can be used as portable quick-access toolboxes held by the non-dominant hand to provide assistive touch commands for pen-driven whiteboard tasks. In particular, we consider an eyes-free UI design, which allows users to operate the handheld device in a blind manner, i.e. without having to look at it, thereby allowing them to concentrate on the pen task.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Maria Husmann, Seraiah Walter and Moira C. Norrie, Eyes-Free Touch Command Support for Pen-Based Digital Whiteboards via Handheld Devices, <em>ITS 2015</em>
(Paper: <a href="pubs/ITS2015Whiteboard.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/ITS2015Whiteboard.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PTMap.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Pen-Based Spatial Queries on Interactive Maps</div>
<p>In this work, we present and evaluate a set of pen-based techniques to annotate maps on tablets or interactive tabletops and selectively convert those annotations into spatial queries, allowing users to search for points of interest within explicitly or implicitly specified scopes, e.g. look for restaurants, hotels etc. within circled areas or along sketched paths or calculated routes.</p>
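<p>The scope test itself can be as simple as a point-in-polygon check against the circled stroke; a minimal JavaScript sketch (illustrative only, using the classic even-odd ray-casting test):</p>
<pre><code>// pois and lasso are arrays of {x, y}; returns the POIs inside the circled area.
function poisInLasso(pois, lasso) {
  return pois.filter(function (p) {
    var inside = false;
    for (var i = 0, j = lasso.length - 1; i &lt; lasso.length; j = i++) {
      if ((lasso[i].y > p.y) !== (lasso[j].y > p.y) &amp;&amp;
          p.x &lt; (lasso[j].x - lasso[i].x) * (p.y - lasso[i].y) /
                (lasso[j].y - lasso[i].y) + lasso[i].x)
        inside = !inside; // each edge crossing toggles the even-odd state
    }
    return inside;
  });
}</code></pre>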
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, David Caspar and Moira C. Norrie, Spatial Querying of Geographical Data with Pen-Input Scopes, <em>ITS 2014</em>
(Paper: <a href="pubs/ITS2014.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/ITS2014.avi"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PTEditor.jpg" /></br></br><img style="width: 232px;" alt="" src="images/AR_application2.jpg" /></td>
<td class="descr">
<div class="ptitle">Document Engineering on Pen and Touch Tabletops</div>
<p>Digital tabletops operated using hybrid pen and touch
input provide rich interaction possibilities. As interactive workdesks within the
office of the future, they stand to support knowledge workers in a number of productivity tasks, many of which are likely to involve documents. This project aims to leverage the potential of those systems to support document-centric activities, especially the editing and authoring of documents. In particular, the practicality of post-WIMP designs based on bimanual gestures is explored.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops, <em>ETH PhD Thesis 2014</em> (Thesis: <a href="pubs/Thesis.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, Pen and Touch Gestural Environment for Document Editing on Interactive Tabletops, <em>ITS 2013</em> (Paper: <a href="pubs/ITS2013.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/ITS2013.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Beyond WIMP: Designing NUIs to Support Productivity Document Tasks, <em>CHI 2013 Workshop "Blended Interaction"</em>
(Extended Abstract: <a href="pubs/CHI2013WS.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Moira C. Norrie, Ihab Al Kabary and Heiko Schuldt, Gesture-Supported Document Creation on Pen and Touch Tabletops, <em>CHI 2013 Works in Progress</em>
(Extended Abstract: <a href="pubs/CHI2013WIP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Towards Document Engineering
on Pen and Touch-Operated Interactive Tabletops, <em>Doctoral
Symposium of UIST 2012 </em>(Extended Abstract: <a href="pubs/UIST2012DS.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, Supporting
Active Reading on Pen and Touch-Operated Tabletops, <em>AVI 2012 </em>(Paper: <a href="pubs/AVI2012.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/Supporting_AR.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>) </p>
</li>
</ul>
<p class="p">The active reading application was also featured twice on national Swiss TV (SF 1):</p>
<ul>
<li>2.1.2012, in the business programme "Eco": <a target="_blank" href="http://www.videoportal.sf.tv/video?id=9fa222d4-63cd-4d5f-b5a5-a533ea8b6310">Video on SF web site (in German)</a> (Application is shown between 2:12 and 2:50)</li>
<li>20.10.2011, in the science programme "Einstein": <a target="_blank" href="videos/SF_Einstein.mkv">excerpt in (Swiss) German with English subtitles</a></li>
</ul>
<p class="p">Also check this UML diagram creation tool some of my students developed in a lab project: <a target="_blank" href="http://www.youtube.com/watch?v=hsZOsjf5un4">Youtube video</a></p>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/PTProperties.png" /><br />
</td>
<td class="descr">
<div class="ptitle">Properties of Pen and Touch Input</div>
<p>Combined bimanual pen and touch input is a relatively new interaction paradigm with promising prospects. Its properties are not yet well understood and hence merit further study. This project experimentally investigates and reports on several important aspects of pen and touch input on horizontal surfaces, including speed, accuracy and coordination.</p>
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, Empirical
Evaluation of Uni- and Bimodel Pen and Touch Interaction Properties on
Digital Tabletops, <em>ITS 2012</em>
(Paper: <a href="pubs/ITS2012.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>, video: <a target="_blank" href="videos/PT_Study.mp4"><img alt="video" src="images/videoicon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/AdaptiveWP.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Adaptive Web-Page Layout for Large Screens</div>
<p>The vast majority of web pages adapt very poorly to large displays, especially widescreens. We propose techniques to produce and evaluate adaptive web pages using web standards (and especially features of HTML5 and CSS3). We address issues such as multi-column layouts, scale-dependent element selection and positioning, font size, line lengths etc.</p>
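<p>As a toy JavaScript illustration of scale-dependent layout (not the template from the papers below), the column count and font size can be driven by the viewport width:</p>
<pre><code>// Hypothetical #content container; reflow text into more columns on wide screens.
function adaptLayout() {
  var content = document.getElementById('content');
  var w = window.innerWidth;
  content.style.columnCount = w > 1600 ? 3 : (w > 1000 ? 2 : 1);
  content.style.columnGap = '2em';
  content.style.fontSize = w > 1600 ? '1.2em' : '1em'; // keep line lengths readable
}
window.addEventListener('resize', adaptLayout);
adaptLayout();</code></pre>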
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Michael Nebeling, Fabrice Matulic, Lucas Streit and Moira C. Norrie, Adaptive Layout Template for Effective Web Content Presentation in Large-Screen Contexts, <em>DocEng 2011</em> (Paper: <a href="pubs/DocEng2011.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>) </p>
</li>
<li>
<p class="p">Michael Nebeling, Fabrice Matulic and Moira C. Norrie, Metrics for the Evaluation of News Site Content Layout in Large-Screen Contexts, <em>CHI 2011</em> (Paper: <a href="pubs/CHI2011.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/SalientPages.png" /><br />
</td>
<td class="descr">
<div class="ptitle">Automatic Extraction of Visually Salient Pages of Large Documents</div>
<p>This technique attempts to automatically select a given number of pages from a document that visually "stand out" with a view to including them in a document list with thumbnails of sample pages (e.g. a catalogue or an online book store). The algorithm considers a set of low-level features such as element block sizes and tone saliency to determine pages that are more likely to attract attention. A smoothing function is available to inject some level of spread in the culling process.</p>
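<p>One way to picture the culling step (an illustrative JavaScript sketch of the general idea, not the published algorithm): greedily pick the highest-scoring pages while a spread term penalises candidates that sit too close to already-selected pages:</p>
<pre><code>// scores: per-page saliency values; returns the indices of k selected pages.
function selectPages(scores, k, spread) {
  var picked = [];
  k = Math.min(k, scores.length);
  while (picked.length &lt; k) {
    var best = -1, bestVal = -Infinity;
    for (var i = 0; i &lt; scores.length; i++) {
      if (picked.indexOf(i) >= 0) continue;
      var penalty = picked.reduce(function (sum, j) {
        return sum + spread / (1 + Math.abs(i - j)); // near neighbours cost more
      }, 0);
      if (scores[i] - penalty > bestVal) { bestVal = scores[i] - penalty; best = i; }
    }
    picked.push(best);
  }
  return picked.sort(function (a, b) { return a - b; });
}</code></pre>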
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Image-Based Technique To Select Visually Salient Pages In Large Documents, <em>JDIM 7(5) 2009</em> (Paper: <a href="pubs/JDIM2009.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li>
<p class="p">Fabrice Matulic, Automatic Selection of Visually Attractive Pages for Thumbnail Display in Document List View, <em>ICDIM 2008 </em>(Paper<a href="pubs/ICDIM2008.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
</ul>
</td>
</tr>
<tr style="border-top:solid">
<td rowspan="8" colspan="1" align="center"><img style="width: 120px;" alt="" src="images/Ricoh_logo.png" /><br />
</td>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/TouchScanSearch.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Touch Scan-n-Search: A Touchscreen Interface To Retrieve Online Versions of Scanned Documents</div>
<p>This system tackles the problem of finding online content based on paper documents through an intuitive touchscreen interface designed for modern scanners and multifunction printers. Touch Scan-n-Search allows the user to select elements of a scanned document (e.g. a newspaper article) and seamlessly connect to common web search services in order to retrieve the online version of the document along with related content. This is achieved by automatically extracting keyphrases from text elements in the document (obtained by OCR) and creating tappable GUI widgets to allow the user to control and fine-tune the search requests.</p>
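<p>A minimal JavaScript sketch of the widget-building step (illustrative only; the search URL and helper names are placeholders, not the system's actual code): each extracted keyphrase becomes a tappable button that refines the query:</p>
<pre><code>function buildSearchWidgets(keyphrases, container, onSearch) {
  var selected = [];
  keyphrases.forEach(function (phrase) {
    var b = document.createElement('button');
    b.textContent = phrase;
    b.onclick = function () {
      selected.push(phrase); // tapping more phrases fine-tunes the request
      onSearch('https://www.example.com/search?q=' +
               encodeURIComponent(selected.join(' ')));
    };
    container.appendChild(b);
  });
}</code></pre>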
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Touch Scan-n-Search: A Touchscreen Interface To Retrieve Online Versions of Scanned Documents, <em>DocEng 2007</em> (Demo abstract: <a href="pubs/DocEng2007.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li><p class="p">US Patent Application 20080115080: <a target="_blank" href="http://www.freepatentsonline.com/y2008/0115080.html">Device, method and computer program product for information retrieval</a></p></li>
<li><p class="p">Japan Patent 2008-140377: <a target="_blank" href="http://www.freepatentsonline.com/JP2008140377.html">Information retrieving device, method and program</a></p></li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/SmartPublisher.jpg" /></span><br />
</td>
<td class="descr">
<div class="ptitle">SmartPublisher - Document Creation on Pen-Based Systems Via Document Element Reuse</div>
<p>SmartPublisher is a powerful, all-in-one application for pen-based devices with which users can quickly and intuitively create new documents by reusing individual image and text elements acquired from analogue and/or digital documents. The application is especially targeted at scanning devices with touch screen operating panels or tablet PCs connected to them (e.g. modern multifunction printers with large touch screen displays), as one of its main purposes is reuse of material obtained from scanned paper documents.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, SmartPublisher - Document Creation on Pen-Based Systems Via Document Element Reuse, <em>DocEng 2006</em> (Demo Abstract: <a href="pubs/DocEng2006.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>)</p>
</li>
<li><p class="p">US Patent 8139257: <a target="_blank" href="http://www.freepatentsonline.com/8139257.html">Document editing apparatus, image forming apparatus, document editing method, and computer program product</a></p></li>
<li>
<p class="p">Japan Patent 2007-150858: <a target="_blank" href="http://www.freepatentsonline.com/JP2007150858.html">Document editing apparatus, image forming apparatus, document editing method and program to make computer execute method</a></p></li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/LayoutTemplate.jpg" /></td>
<td class="descr">
<div class="ptitle">Document Layout Recognition and Template Matching</div>
<p>This system allows the user to draw rough frames with a stylus or use a scanned drawing to create placeholders for content to be inserted (e.g. photos for a photo album). Based on the hand-drawn shapes, queries to search for matching templates can also be issued. The user can then select an appropriate template and automatically map content to its placeholders.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">US Patent 8165404: <a target="_blank" href="http://www.freepatentsonline.com/8165404.html">Method and apparatus for creating document data, and computer program product</a></p>
</li>
<li>
<p class="p">Japan Patent 2009-093628: <a target="_blank" href="http://www.freepatentsonline.com/JP2009075651.html">Document utilization support device, document utilization support method and document utilization support program</a></p>
</li>
</ul></td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/ContentsBar.png" /><br />
</td>
<td class="descr">
<div class="ptitle">Advanced UI for Efficient Document Element Transfer on High-End Multifunction Printers</div>
<p>This application, designed for pen-operated multifunction printers, integrates two modules that help users send and share elements of scanned documents. The advanced scan2E-Mail function allows users to send only the desired portion of the scanned document, as well as extracted text directly in the E-Mail body. The size and compression level of the sent content can also be adapted to the target recipient device (e.g. a mobile phone). The second module lets users gather and send document elements to their work PC. The contents appear in a sidebar from which they can be dragged and dropped into desktop applications such as a word processor.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li><p class="p">US Patent 8201072: <a target="_blank" href="http://www.freepatentsonline.com/8201072.html">Image forming apparatus, electronic mail delivery server, and information processing apparatus</a></p></li>
<li><p class="p">US Patent Application 20120224232: <a target="_blank" href="http://www.freepatentsonline.com/y2012/0224232.html">Image Forming Apparatus, Electronic Mail Delivery Server, and Information Processing Apparatus</a></p></li>
<li><p class="p">US Patent Application 20070220425: <a target="_blank" href="http://www.freepatentsonline.com/y2007/0220425.html">Electronic mail editing device, image forming apparatus and electronic mail editing method</a></p></li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/VideoPrinting.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Printing Web Pages with Embedded Videos</div>
<p>Attempting to print a web page with embedded multimedia content using a standard web browser will at best yield a printout with a single frame in lieu of the video. This technique, meant as a browser plugin, extracts a number of relevant frames to be included in the printout to recover some of the lost context of the video. The result is a document containing strips of representative movie frames at the location of the video or at the end of the document.</p>
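<p>The gist, transposed to today's HTML5 media APIs (a sketch only; the original plugin predates them and selects representative rather than uniformly spaced frames):</p>
<pre><code>// Grab `count` evenly spaced frames of a &lt;video&gt; element as canvases.
// Call once the video's metadata (duration, dimensions) has loaded.
function captureFrames(video, count, onDone) {
  var frames = [], step = video.duration / (count + 1), next = 1;
  video.addEventListener('seeked', function grab() {
    var c = document.createElement('canvas');
    c.width = video.videoWidth; c.height = video.videoHeight;
    c.getContext('2d').drawImage(video, 0, 0); // copy the current frame
    frames.push(c);
    if (next &lt; count) { next++; video.currentTime = next * step; }
    else { video.removeEventListener('seeked', grab); onDone(frames); }
  });
  video.currentTime = step; // each seek fires 'seeked' once the frame is ready
}</code></pre>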
<div class="relpub">Related Publication</div>
<ul class="disc">
<li>
<p class="p">Japan Patent 2009-065339: <a target="_blank" href="http://www.freepatentsonline.com/JP2009065339.html">Device, system, method and program for generating print data</a></p>
</li>
</ul>
</td>
</tr>
<tr>
<td align="center"><object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="232" id="FlashID" title="Interactive Animated Document Icons">
<param name="movie" value="images/IADI.swf" />
<param name="quality" value="high" />
<param name="wmode" value="opaque" />
<param name="swfversion" value="8.0.35.0" />
<!-- This param tag prompts users with Flash Player 6.0 r65 and higher to download the latest version of Flash Player. Delete it if you don’t want users to see the prompt. -->
<param name="expressinstall" value="Scripts/expressInstall.swf" />
<!-- Next object tag is for non-IE browsers. So hide it from IE using IECC. -->
<!--[if !IE]>-->
<object type="application/x-shockwave-flash" data="images/IADI.swf" width="232" >
<!--<![endif]-->
<param name="quality" value="high" />
<param name="wmode" value="opaque" />
<param name="swfversion" value="8.0.35.0" />
<param name="expressinstall" value="Scripts/expressInstall.swf" />
<!-- The browser displays the following alternative content for users with Flash Player 6.0 and older. -->
<div>
<h4>Content on this page requires a newer version of Adobe Flash Player.</h4>
<p><a href="http://www.adobe.com/go/getflashplayer"><img src="http://www.adobe.com/images/shared/download_buttons/get_flash_player.gif" alt="Get Adobe Flash player" width="112" height="33" /></a></p>
</div>
<!--[if !IE]>-->
</object>
<!--<![endif]-->
</object><p style="font-size:0.7em">(click and use arrow keys to turn pages)</p>
</td>
<td class="descr">
<div class="ptitle">Interactive Animated Document Icons</div>
<p>Interactive Animated Document Icons or IADIs are full documents rendered in thumbnail size to be integrated in document lists or file browsers. Pages of an IADI can be "turned" following a user-defined trigger (mouse hover, wheel or keyboard). A magnifying function is also available to zoom the pages for quick previews.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">US Patent Application 20090183114: <a target="_blank" href="http://www.freepatentsonline.com/y2009/0183114.html">Information processing apparatus and computer program product</a></p>
</li>
<li>
<p class="p">Japan Patent 2009-169537: <a target="_blank" href="http://www.freepatentsonline.com/JP2009169537.html">Information processor, symbol display method and symbol display program</a></p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/DocumentCompiler.jpg" /><br />
</td>
<td class="descr">
<div class="ptitle">Automatic Document Compiler</div>
<p>The goal of this project is to provide a comprehensive solution to gather and aggregate content relevant to the user from heterogeneous sources (e.g. news articles about a particular topic) and compile the elements into a single coherent document with an appropriate layout. Several criteria are considered to define the constraints used to produce the layout: user preferences, target medium constraints, aesthetic layout rules and semantic similarity between items. </p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">US Patent Application 20090180126: <a href="http://www.freepatentsonline.com/y2009/0180126.html" target="_blank">Information processing apparatus, method of generating document, and computer-readable recording medium</a></p>
</li>
<li>
<p class="p">Japan Patent 2009-169536: <a href="http://www.freepatentsonline.com/JP2009169536.html" target="_blank">Information processor, image forming apparatus, document creating method and document creating program</a></p>
</li>
</ul>
</td>
</tr>
<tr>
<td style="text-align: center;"><img style="width: 232px;" alt="" src="images/SmartNavi.png" /><br />
</td>
<td class="descr">
<div class="ptitle">Document Element Extraction and Search</div>
<p>This work deals with the extraction of document components from existing office documents in order to populate a database of reusable elements. Those elements can then be retrieved via an ad hoc web interface (using keywords, but also content-based searching) and inserted into new documents.</p>
<div class="relpub">Related Publications </div>
<ul class="disc">
<li>
<p class="p">Japan Patent 2009-075651: <a href="http://www.freepatentsonline.com/JP2009075651.html" target="_blank">Document utilization support device, document utilization support method and document utilization support program</a></p>
</li>
<li>
<p class="p">Japan Patent 2008-071311 : <a href="http://www.freepatentsonline.com/JP2008071311.html" target="_blank">Image retrieval apparatus, image retrieval method, image retrieval program and information storage medium</a></p>
</li>
</ul>
</td>
</tr>
</tbody>
</table></div>
<div id="tabs-2">
<h3>Peer-reviewed publications</h3>
<ul class="disc">
<li>
<p class="p">Fabrice Matulic, Taiga Kashima, Deniz Beker, Daichi Suzuo, Hiroshi Fujiwara and Daniel Vogel, <strong>Above-Screen Fingertip Tracking with a Phone in Virtual Reality</strong>, <em>Proc. Graphics Interface (GI 2024), Halifax, NS, Canada, May 2024</em>: <a href="pubs/GI2024.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Wataru Kawabe, Taisuke Hashimoto, Fabrice Matulic, Takeo Igarashi, Keita Higuchi. <strong>Interactive Material Annotation on 3D Scanned Models leveraging Color-Material Correlation</strong>, <em>SIGGRAPH Asia 2023 Technical Communications</em>: <a href="pubs/SIGGRAPHAsia2023TC.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Kotaro Oomori, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, Keita Higuchi.<strong> Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames</strong>, <em>Proc. of the ACM on Human-Computer Interaction, Volume 7, Issue ISS (ISS 2023), Pittsburgh, USA, November 2023</em>: <a href="pubs/ISS2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel, <strong>Pen+Touch+Midair: Cross-Space Hybrid Bimanual Interaction on Horizontal Surfaces in Virtual Reality</strong>, <em>Proc. Graphics Interface (GI 2023), Victoria, BC, Canada, May 2023</em>: <a href="pubs/GI2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Yen-Ting Yeh, Fabrice Matulic and Daniel Vogel, <strong>Phone Sleight of Hand: Finger-Based Dexterous Gestures for Physical Interaction with Mobile Phones</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2023), Hamburg, Germany, April 2023</em>: <a href="pubs/CHI2023.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Taiga Kashima, Deniz Beker, Daichi Suzuo, Hiroshi Fujiwara and Daniel Vogel, <strong>Above-Screen Fingertip Tracking with a Phone in Virtual Reality</strong>, <em>Proc. Conference on Human Factors in Computing Systems Extended Abstracts (CHI 2023 Late Breaking Work), Hamburg, Germany, April 2023</em>: <a href="pubs/CHI2023LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Keita Higuchi, Taiyo Mizuhashi, Fabrice Matulic and Takeo Igarashi, <strong>Interactive Generation of Image Variations for Copy-Paste Data Augmentation</strong>, <em>Proc. Conference on Human Factors in Computing Systems Extended Abstracts (CHI 2023 Late Breaking Work), Hamburg, Germany, April 2023</em>: <a href="pubs/CHI2023LBW2.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel, <strong>Terrain Modelling with a Pen & Touch Tablet and Mid-Air Gestures in Virtual Reality</strong>, <em>Proc. Conference on Human Factors in Computing Systems Extended Abstracts (CHI 2022 Late Breaking Work), New Orleans, USA, April 2022</em>: <a href="pubs/CHI2022LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Nalin Chhibber, Hemant Bhaskar Surale, Fabrice Matulic and Daniel Vogel, <strong>Typealike: Near-Keyboard Hand Postures for Expanded Laptop Interaction</strong>, <em>Proc. of the ACM on Human-Computer Interaction, Volume 5, Issue ISS (ISS 2021), Łódź, Poland, November 2021</em> (Honourable Mention Award): <a href="pubs/ISS2021a.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a></p>
</li>
<li>
<p class="p">Terence Dickson, Rina R. Wehbe, Fabrice Matulic, Daniel Vogel, <strong>HybridPointing for Touch: Switching Between Absolute and Relative Pointing on Large Touch Screens</strong>, <em>Proc. of the ACM on Human-Computer Interaction, Volume 5, Issue ISS (ISS 2021), Łódź, Poland, November 2021</em>: <a href="pubs/ISS2021b.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a></p>
</li>
<li>
<p class="p">Fabrice Matulic and Daniel Vogel, <strong>Deep Learning-Based Hand Posture Recognition for Pen Interaction Enhancement</strong>, <em>Artificial Intelligence for Human Computer Interaction: A Modern Approach, Springer HCIS 2021</em>: <a href="https://link.springer.com/chapter/10.1007/978-3-030-82681-9_7"><img alt="pdf" src="images/Springericon.png" border="0" style="vertical-align:middle"/></a></p>
</li>
<li>
<p class="p">Fabrice Matulic, Aditya Ganeshan, Hiroshi Fujiwara and Daniel Vogel, <strong>Phonetroller: Visual Representations of Fingers for Precise Touch Input with Mobile Phones in VR</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2021), Yokohama, Japan, May 2021</em>: <a href="pubs/CHI2021.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Riku Arakawa, Brian Vogel and Daniel Vogel, <strong>PenSight: Enhanced Interaction with a Pen-Top Camera</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2020), Honolulu, HI, USA, April 2020</em> (Best Paper Award): <a href="pubs/CHI2020.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Brian Vogel, Naoki Kimura and Daniel Vogel, <strong>Eliciting Pen-Holding Postures for General Input with Suitability for EMG Armband Detection</strong>, <em>Proc. ACM International Conference on Interactive Surfaces and Spaces (ISS 2019), Deajeon, Republic of Korea, November 2019</em>:
<a href="pubs/ISS2019.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Hemant Bhaskar Surale, Fabrice Matulic and Daniel Vogel, <strong>Experimental Analysis of Barehand Mid-air Mode-Switching Techniques in Virtual Reality</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, Scotland, UK, May 2019</em> (Honourable Mention Award): <a href="pubs/CHI2019.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Naoya Yoshimura, Hironori Yoshida, Fabrice Matulic and Takeo Igarashi, <strong>Extending Discrete Verbal Commands with Continuous Speech for Flexible Robot Control</strong>, <em>Proc. Extended Abstracts on Human Factors in Computing Systems (CHI 2019), Glasgow, Scotland, UK, May 2019</em>:
<a href="pubs/CHI2019LBW.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Yuta Kikuchi and Jason Naradowsky, <strong>Enabling Customer-Driven Learning and Customisation Processes for ML-Based Domestic Robots</strong>, <em>HCML Perspectives Workshop at Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, Scotland, UK, May 2019</em>:
<a href="pubs/CHI2019HCMLWorkshop.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>ColourAIze: AI-Driven Colourisation of Paper Drawings with Interactive Projection System</strong>, <em>Proc. ACM International Conference on Interactive Surfaces and Spaces (ISS 2018), Tokyo, Japan, November 2018</em>:
<a href="pubs/ISS2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Drini Cami, Fabrice Matulic, Richard G. Calland, Brian Vogel, Daniel Vogel, <strong>Unimanual Pen+Touch Input Using Variations of Precision Grip Postures</strong>, <em>Proc. ACM Symposium on User Interface Software and Technology (UIST 2018), Berlin, Germany, October 2018</em>:
<a href="pubs/UIST2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li><p class="p">Fabrice Matulic, Daniel Vogel, <strong>Multiray: Multi-Finger Raycasting for Large Displays</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2018), Montréal, QC, Canada, April 2018</em> (Honourable Mention Award): <a href="pubs/CHI2018.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li><p class="p">Fabrice Matulic, Daniel Vogel and Raimund Dachselt, <strong>Hand Contact Shape Recognition for Posture-Based Tabletop Widgets and Interaction</strong>, <em>Proc. ACM International Conference on Interactive Surfaces and Spaces (ISS 2017), Brighton, UK, October 2017</em>: <a href="pubs/ISS2017.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li><p class="p">Hemant Bhaskar Surale, Fabrice Matulic and Daniel Vogel, <strong>Experimental Analysis of Mode Switching Techniques in Touch-based User Interfaces</strong>, <em>Proc. Conference on Human Factors in Computing Systems (CHI 2017), Denver, CO, USA, May 2017</em>: <a href="pubs/CHI2017.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Lars Engeln, Christoph Träger and Raimund Dachselt, <strong>Embodied Interactions for Novel Immersive Presentational Experiences</strong>, <em>Proc. Extended Abstracts on Human Factors in Computing Systems (CHI 2016), San Jose, CA, USA, May 2016</em>:
<a href="pubs/CHI2016LBW_EIP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Wolfgang Büschel, Michael Ying Yang, Stephan Ihrke, Anmol Ramraika, Carsten Rother and Raimund Dachselt, <strong>Smart Ubiquitous Projection: Discovering Surfaces for the Projection of Adaptive Content</strong>, <em>Proc. Extended Abstracts on Human Factors in Computing Systems (CHI 2016), San Jose, CA, USA, May 2016</em>:
<a href="pubs/CHI2016LBW_SP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Maria Husmann, Seraiah Walter and Moira C. Norrie, <strong>Eyes-Free Touch Command Support for Pen-Based Digital Whiteboards via Handheld Devices</strong>, <em>Proc. ACM Interactive Tabletops and Surfaces 2015 Conference (ITS 2015), Funchal, Madeira, Portugal, November 2015</em>:
<a href="pubs/ITS2015Whiteboard.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Ulrich von Zadow and Raimund Dachselt, <strong>Interaction Design for Large Vertical vs. Horizontal Displays: Open Issues</strong>, <em>Workshop on Interaction on Large Displays at ACM Interactive Tabletops and Surfaces 2015 Conference (ITS 2015), Funchal, Madeira, Portugal, November 2015</em>:
<a href="pubs/ITS2015Workshop.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Ulrike Kister, Patrick Reipschläger, Fabrice Matulic and Raimund Dachselt, <strong>BodyLenses – Embodied Magic Lenses and Personal Territories for Wall Displays</strong>, <em>Proc. ACM Interactive Tabletops and Surfaces 2015 Conference (ITS 2015), Funchal, Madeira, Portugal, November 2015</em>:
<a href="pubs/ITS2015Bodylenses.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, David Caspar and Moira C. Norrie, <strong>Spatial Querying of Geographical Data
with Pen-Input Scopes</strong>, <em>Proc. ACM Interactive Tabletops and Surfaces 2014 Conference (ITS 2014), Dresden, Germany, November 2014</em>:
<a href="pubs/ITS2014.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Ken Hinckley, Michel Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, Marcel Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, Bill Buxton and Andy Wilson, <strong>Sensing Techniques for Tablet+Stylus Interaction</strong>, <em>Proc. ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, HI, USA, October 2014 (Best Paper Award)</em>:
<a href="http://dl.acm.org/citation.cfm?id=2647379"><img alt="pdf" src="images/acmdl.gif" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops, <em>ETH PhD Thesis 2014</em>: <a href="pubs/Thesis.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a></p>
</li>
<li>
<p class="p">Ihab Al Kabary, Ivan Giangreco, Heiko Schuldt, Fabrice Matulic and Moira C. Norrie, <strong>QUEST: Towards a Multi-Modal CBIR Framework Combining Query-by-Example, Query-by-Sketch, and Text Search</strong>, <em>Proc. 9th IEEE International Workshop on Multimedia Information Processing and Retrieval (IEEE MIPR2013), Anaheim CA, USA, December 2013 </em>:
<a href="pubs/MIPR.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, <strong>Pen and Touch Gestural Environment for Document Editing on Interactive Tabletops</strong>, <em>Proc. ACM Interactive Tabletops and Surfaces 2013 Conference (ITS 2013), St Andrews, Scotland, UK, October 2013</em>:
<a href="pubs/ITS2013.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>Beyond WIMP: Designing NUIs to Support Productivity Document Tasks</strong>, <em>Blended Interaction, Envisioning Future Collaborative Interactive Spaces, CHI 2013 Workshop, Paris, France, April 2013</em>:
<a href="pubs/CHI2013WS.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, Moira C. Norrie, Ihab Al Kabary and Heiko Schuldt, <strong>Gesture-Supported Document Creation on Pen and Touch Tabletops</strong>, <em>CHI 2013 Extended Abstracts on Human Factors in Computing Systems, Works-in-Progress, Paris, France, April 2013</em>:
<a href="pubs/CHI2013WIP.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, <strong>Empirical Evaluation of Uni- and Bimodel Pen and Touch Interaction Properties on Digital Tabletops</strong>, <em>Proc. ACM Interactive Tabletops and Surfaces Conference (ITS 2012), Cambridge (MA), USA, November 2012</em>:
<a href="pubs/ITS2012.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops</strong>, <em>Doctoral Symposium of the 25th ACM Symposium on User Interface Software and Technology (UIST 2012), Cambridge (MA), USA, October 2012</em>:
<a href="pubs/UIST2012DS.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic and Moira C. Norrie, <strong>Supporting Active Reading on Pen and Touch-Operated Tabletops</strong>, <em>Proc. International Working Conference on Advanced Visual Interfaces (AVI 2012), Capri Island, Italy, May 2012</em>:
<a href="pubs/AVI2012.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Michael Nebeling, Fabrice Matulic, Lucas Streit and Moira C. Norrie, <strong>Adaptive Layout Template for Effective Web Content Presentation in Large-Screen Contexts</strong>, <em>Proc. 2011 ACM Symposium on Document Engineering (DocEng 2011), Mountain View, CA, USA, September 2011</em>:
<a href="pubs/DocEng2011.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Michael Nebeling, Fabrice Matulic and Moira C. Norrie, <strong>Metrics for the Evaluation of News Site Content Layout in Large-Screen Contexts</strong>, <em>Proc. ACM Conference on Human Factors in Computing Systems (CHI 2011), Vancouver, BC, Canada, May 2011 (Honorable Mention Award)</em>:
<a href="pubs/CHI2011.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>Image-Based Technique To Select Visually Salient Pages In Large Documents</strong>, <em>Journal of Digital Information Management, Vol. 7, Issue 5, Oct. 2009</em>:
<a href="pubs/JDIM2009.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>Automatic Selection of Visually Attractive Pages for Thumbnail Display in Document List View</strong>, <em>Proc. Third International Conference on Digital Information Management (ICDIM 2008), London, UK</em>:
<a href="pubs/ICDIM2008.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>Touch Scan-n-Search: A Touchscreen Interface To Retrieve Online Versions of Scanned Documents</strong>, <em>Proc. 2007 ACM symposium on Document engineering (DocEng 2007), Winnipeg, Manitoba, Canada</em>:
<a href="pubs/DocEng2007.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
<li>
<p class="p">Fabrice Matulic, <strong>SmartPublisher - Document Creation on Pen-Based Systems Via Document Element Reuse</strong>, <em>Proc. 2006 ACM symposium on Document engineering (DocEng 2006), Amsterdam, The Netherlands</em>:
<a href="pubs/DocEng2006.pdf"><img alt="pdf" src="images/pdf-icon.png" border="0" style="vertical-align:middle"/></a>
</p>
</li>
</ul>
<p style="text-align:center">
<img src="images/LineSeparator.png" border="0">
</p>
<h3 class="heading">Patents</h3>
<div class="tablemargin">
<table class="silvatable plain" cellpadding="3" cellspacing="0" width="100%">
<colgroup><col class="align-left" width="12%">
<col class="align-left" width="87%">
</colgroup><tbody><tr>
<td class="align-left">9060085</td>
<td class="align-left"><a href="http://www.freepatentsonline.com/9060085.html" target="_blank">Image forming apparatus, electronic mail delivery server, and information processing apparatus</a><br/>
An image forming apparatus includes an information delivery apparatus including an analysis unit and a first display controller. The analysis unit analyzes data to extract data elements. The first... </td>
</tr>
<tr>
<td class="align-left">8726178</td>
<td class="align-left"><a href="http://www.freepatentsonline.com/8726178.html" target="_blank">Device, method, and computer program product for information retrieval</a><br/>
An information retrieval device includes an area splitting unit that splits the input information into a plurality of subareas by each information attribute, an extracting unit that extracts a... </td>
</tr>
<tr>
<td class="align-left">US20120224232</td>
<td class="align-left"><a href="http://www.freepatentsonline.com/y2012/0224232.html" target="_blank">Image Forming Apparatus, Electronic Mail Delivery Server, and Information Processing Apparatus</a><br/>
An image forming apparatus includes an information delivery apparatus including an analysis unit and a first display controller. The analysis unit analyzes data to extract data elements. The first... </td>
</tr>
<tr>
<td class="align-left">US20120162684</td>