İnsan duruş ve yönelimlerinin derin öğrenme ile sınıflandırılması

Translated title of the contribution: Classification of human poses and orientations with deep learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Within the scope of this study, we aim to classify human poses and orientations in group activity images using deep learning. In the framework that we developed, the detection, pose classification, and orientation classification steps are performed in a cascade. First, people in the images are detected; the detected people are then classified into one of the classes 'standing', 'sitting on an object', and 'sitting on the ground', and finally into one of eight different orientations within these three pose classes. To this end, an end-to-end trainable deep learning framework is used. The experimental evaluation shows that the trained Convolutional Neural Network model produces successful results.
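The cascade described in the abstract can be sketched as a simple pipeline: detect people, classify each detection's pose, then classify its orientation conditioned on the pose. The sketch below is a minimal illustration with hypothetical stand-in callables; the class names come from the abstract, but the 45° orientation bins, the function names, and the detector/classifier stubs are assumptions, not the paper's actual CNN.

```python
# Hypothetical sketch of the cascaded pipeline from the abstract:
# detection -> 3-class pose classification -> 8-way orientation classification.
# All callables here are stand-ins for the trained CNN components.

POSE_CLASSES = ("standing", "sitting on an object", "sitting on the ground")
# Eight orientations; 45-degree spacing is an assumption for illustration.
ORIENTATIONS = tuple(range(0, 360, 45))

def classify_group_image(image, detect_people, classify_pose, classify_orientation):
    """Run the detection -> pose -> orientation cascade on one image.

    detect_people:        image -> list of per-person crops
    classify_pose:        crop -> one of POSE_CLASSES
    classify_orientation: (crop, pose) -> one of ORIENTATIONS
    """
    results = []
    for person_crop in detect_people(image):
        pose = classify_pose(person_crop)
        orientation = classify_orientation(person_crop, pose)
        results.append({"pose": pose, "orientation": orientation})
    return results

# Usage with trivial stubs in place of trained models:
people = classify_group_image(
    "group_photo",                                  # placeholder for pixel data
    detect_people=lambda img: ["crop1", "crop2"],   # pretend two people found
    classify_pose=lambda crop: POSE_CLASSES[0],
    classify_orientation=lambda crop, pose: ORIENTATIONS[2],
)
```

Because the steps run in cascade, each stage only sees the output of the previous one, which mirrors the end-to-end trainable structure the abstract describes.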

Translated title of the contribution: Classification of human poses and orientations with deep learning
Original language: Turkish
Title of host publication: 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-4
Number of pages: 4
ISBN (Electronic): 9781538615010
DOIs
Publication status: Published - 5 Jul 2018
Event: 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018 - Izmir, Turkey
Duration: 2 May 2018 to 5 May 2018

Publication series

Name: 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018

Conference

Conference: 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018
Country/Territory: Turkey
City: Izmir
Period: 2/05/18 to 5/05/18
