In Proceedings of CHI 2016

SelPh:

Progressive Learning and Support of Manual Photo Color Enhancement

Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi (The University of Tokyo)

Summary: We propose SelPh, a system for photo color enhancement. The system progressively learns the user's preferences, little by little, from their editing history. It also provides several user support features based on the learned results, such as slider guidance.


Concept diagram of the self-reinforcing color enhancement proposed in this paper. As color enhancement work proceeds, the system implicitly and progressively learns the user's preferences, and accordingly becomes able to support the user more effectively.

Overview of the prototype system SelPh. It provides several user support features that exploit self-reinforcement, such as sliders with guidance and adaptive behavior changes based on the estimated confidence.

Abstract

Color enhancement is a very important aspect of photo editing. Even when photographers have tens or hundreds of photographs, they must enhance each photo one by one by manually tweaking sliders such as brightness and contrast in editing software, because automatic color enhancement is not always satisfactory for them. To support this repetitive manual task, we present self-reinforcing color enhancement, where the system implicitly and progressively learns the user's preferences by training on their photo editing history. The more photos the user enhances, the more effectively the system supports the user. We present a working prototype system called SelPh, and then describe the algorithms used to perform the self-reinforcement. We conduct a user study to investigate how photographers would use a self-reinforcing system to enhance a collection of photos. The results indicate that the participants were satisfied with the proposed system and strongly agreed that the self-reinforcing approach is preferable to the traditional workflow.
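The core idea (learn a mapping from photo features to the user's chosen slider values, refitting after every finished edit) can be sketched as follows. This is not the paper's actual algorithm; it is a hypothetical illustration using simple ridge regression, where `ProgressivePreferenceModel`, its feature representation, and the regularization weight are all assumptions made for the sketch.

```python
import numpy as np

class ProgressivePreferenceModel:
    """Hypothetical sketch of self-reinforcing enhancement:
    each finished edit (photo features -> chosen slider values)
    is appended to the history, and the model is refit after
    every edit, so its suggestions improve as the user works."""

    def __init__(self, n_features, n_sliders, reg=1e-2):
        self.reg = reg          # ridge regularization weight (assumed)
        self.X, self.Y = [], [] # editing history: features and slider values
        self.W = np.zeros((n_features, n_sliders))

    def record_edit(self, features, sliders):
        """Store one finished edit and refit on the whole history."""
        self.X.append(features)
        self.Y.append(sliders)
        X = np.asarray(self.X)
        Y = np.asarray(self.Y)
        # Closed-form ridge regression: W = (X^T X + reg*I)^-1 X^T Y
        A = X.T @ X + self.reg * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)

    def suggest(self, features):
        """Predict slider values for a new photo from its features."""
        return np.asarray(features) @ self.W

# Usage: after two recorded edits, the model already suggests
# settings close to what the user chose for similar photos.
model = ProgressivePreferenceModel(n_features=3, n_sliders=2)
model.record_edit([1.0, 0.0, 0.0], [0.5, 0.2])
model.record_edit([0.0, 1.0, 0.0], [0.8, 0.1])
suggestion = model.suggest([1.0, 0.0, 0.0])
```

In a real system the features would be image statistics and the targets the enhancement parameters; the point of the sketch is only that refitting on a growing history is what makes the support strengthen over time.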

Videos

Quick Preview Video (0:30)

Main Video (3:15)

Download

Publication

Yuki Koyama, Daisuke Sakamoto, and Takeo Igarashi. 2016. SelPh: Progressive Learning and Support of Manual Photo Color Enhancement. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16), pp. 2520-2532.

Slide/Presentation

Contact

Yuki Koyama - koyama@is.s.u-tokyo.ac.jp

News

Links

Acknowledgments

Yuki Koyama is funded by a JSPS research fellowship. This work was supported by JSPS KAKENHI Grant Numbers 26-8574 and 26240027.