WO 2016/086266 PCT/AU2015/000736

Abstract
In one aspect, there is disclosed a device for imaging a body. In one arrangement, the device comprises: a controller; storage storing electronic program instructions for controlling the controller; a display for displaying a user interface; and an input means. In one form, the controller is operable, under control of the electronic program instructions, to: receive input via the input means, the input comprising a first representation of the body; process the first representation; generate a second representation of the body on the basis of processing of the first representation; and display the generated second representation via the display.
Claims
- A device for imaging a body, the device comprising: a controller; storage storing electronic program instructions for controlling the controller; a display for displaying a user interface; and an input means; wherein the controller is operable, under control of the electronic program instructions, to: receive input via the input means, the input comprising a first representation of the body; process the first representation; generate a second representation of the body on the basis of processing of the first representation; and display the generated second representation via the display.
- A device according to claim 1, wherein the controller is operable, under control of the electronic program instructions, to: process the first representation of the body by segmenting the first representation of the body to obtain a plurality of silhouettes which represent, in simple form, projected shadows of a substantially true three dimensional scan of the body; and generate the second representation of the body on the basis of the silhouettes.
- A device according to claim 1 or claim 2, wherein the controller is operable, under control of the electronic program instructions, to: process the first representation of the body by segmenting the first representation of the body to obtain a plurality of silhouettes which represent, in simple form, projected shadows of a substantially true three dimensional scan of the body; and generate the second representation of the body on the basis of the silhouettes and thousands of known human shapes learned offline using intelligent machine learning techniques.
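The segmentation step in the two preceding claims can be sketched, in deliberately minimal form, as background subtraction followed by thresholding. The claims do not specify a segmentation method, so the approach and the `extract_silhouette` helper below are illustrative assumptions only.

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Segment the body (foreground) from a static background and return a
    binary silhouette -- a rough stand-in for the claimed segmentation step.
    Inputs are grayscale uint8 images of the same shape."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = body, 0 = background

# Toy example: a bright square "body" on a dark background.
bg = np.zeros((8, 8), dtype=np.uint8)
fg = bg.copy()
fg[2:6, 2:6] = 200
sil = extract_silhouette(fg, bg)
```

In practice a production system would use a learned or model-based segmenter rather than a fixed threshold; this sketch only shows the shape of the data flowing into the silhouette-based reconstruction.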
- A device according to claim 2 or claim 3, wherein the controller is also operable, under control of the electronic program instructions, to: generate a user-specific skeleton that will appear on the display once the input is received; and, during the process of segmenting the first representation, enable the user to align the body in the first representation with the user-specific skeleton.
- A device according to claim 3 or claim 4, wherein the controller is also operable, under control of the electronic program instructions, to: instruct the user via audible sounds/words/speech to align parts of the body to the displayed user-specific skeleton, wherein the electronic program instructions are operable to control the alignment process by errors calculated between characteristics including shape, pose, and spatiotemporal features that are extracted from the generated skeleton and the body's real time captured image(s).
- A device according to claim 4 or claim 5, wherein the controller is also operable, under control of the electronic program instructions, to: instruct the user via audible sounds/words/speech to align parts of the body to the displayed user-specific skeleton, wherein the electronic program instructions are operable to control the alignment process by errors calculated between characteristics and various data including shape appearance and variation features, pose features, and spatiotemporal features that are extracted from the generated skeleton and the body's real time captured image(s).
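The error-driven alignment control in the two claims above could, under one simple reading, compare corresponding joint positions of the displayed skeleton with joints detected in the live image. The mean-distance formulation below is an assumption; the claims leave the exact error computation open.

```python
import numpy as np

def alignment_error(skeleton_joints, detected_joints):
    """Mean Euclidean distance between corresponding 2D joints of the
    displayed user-specific skeleton and joints detected in the live
    image -- one plausible form of the claimed alignment error."""
    d = np.linalg.norm(skeleton_joints - detected_joints, axis=1)
    return float(d.mean())

ref = np.array([[0.0, 0.0], [1.0, 0.0]])   # skeleton joint positions
cur = np.array([[0.0, 3.0], [1.0, 4.0]])   # detected joint positions
err = alignment_error(ref, cur)            # distances 3 and 4 -> mean 3.5
```

A device could drive the audible guidance from this scalar, e.g. prompting the user until the error falls below a tolerance.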
- A device according to claim 5 or claim 6, wherein the controller is also operable, under control of the electronic program instructions, to: calculate on the basis of user height information submitted, image size (image height and width in pixels), and using blob analysis of binary images, projection theories and camera models, the following: initial estimates of intrinsic and extrinsic parameters of the capturing camera, which include camera position and orientation in each image, defined as pose P; and, initial estimates of joint kinematics of a skeletal model representing a skeleton of the body, defined as JK, including 3D position and 3D orientation of each joint of the skeletal model.
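One way the initial camera estimate in this claim could combine blob analysis with the stated user height is sketched below: the pixel height of the body blob plus the user's height give an image scale, and a pinhole model gives an initial camera distance. The assumed focal length and the single-blob simplification are illustrative, not taken from the claim.

```python
import numpy as np

def initial_camera_estimate(binary_image, user_height_m, focal_px=1500.0):
    """Rough initial estimate from blob analysis of a binary silhouette.
    Assumes one upright body blob and a pinhole camera with an assumed
    focal length in pixels (an illustrative value, not from the claim)."""
    rows = np.where(binary_image.any(axis=1))[0]
    blob_h_px = rows.max() - rows.min() + 1            # blob height in pixels
    px_per_m = blob_h_px / user_height_m               # image scale
    distance_m = focal_px * user_height_m / blob_h_px  # pinhole: Z = f*H/h
    return px_per_m, distance_m

img = np.zeros((1000, 500), dtype=np.uint8)
img[100:850, 200:300] = 1                  # body blob, 750 px tall
scale, dist = initial_camera_estimate(img, user_height_m=1.75)
```

These values would only seed a later optimization; full intrinsic/extrinsic calibration needs more constraints than a single silhouette provides.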
- A device according to claim 5, wherein the controller is also operable, under control of the electronic program instructions, to: predict on the basis of the user height and weight information submitted, or the user height information only, an initial on-average avatar, defined as Av, which varies with the user's entered height, weight or other body measurements if known; and, rig the on-average avatar Av to a reference skeleton of size N-joints with known skeletal model JK in a reference pose, and a bone weight/height matrix defined as W.
- A device according to claim 8, wherein the matrix W is calculated offline just once during a learning process of the prediction process, then saved together with the reference skeletal model JK to be used for prediction or generation of other avatars, the purpose of W being to constrain, control and model the relationship between joints, bones and the actual 3D avatar surface represented by its vertices V, edges E and faces F.
- A device according to claim 9, wherein the process of predicting the initial on-average avatar Av follows a sophisticated multivariate-based machine learning approach.
- A device according to claim 10, wherein the multivariate-based machine learning approach comprises offline machine intelligence learning human shapes using 3D features extracted from a plurality of rigged (rendered) three dimensional scans of real humans (males and females) of different ages and poses.
- A device according to claim 10 or claim 11, wherein the multivariate-based machine learning approach comprises offline machine intelligence learning human shapes using 3D features extracted from a plurality of rigged and rendered three dimensional scans of real humans (males and females) of different ages and poses.
- A device according to claim 12, wherein the multivariate-based machine learning approach further comprises the machine intelligence learning various statistical relationships between different body measurements defined as vector M = (m1, m2, ..., mL) with L number of different measurements wherein, in use, one or more measurements can be predicted given one or more different measurements and an on-average avatar Av can be predicted given one, or more of these measurements.
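One concrete, and entirely illustrative, instance of the learned statistical relationships in this claim is a linear least-squares model that predicts a missing measurement from the ones the user supplies. The synthetic data and the linear model form are assumptions; the claim does not fix either.

```python
import numpy as np

# Offline: learn a statistical relationship between measurements, here
# predicting one measurement (e.g. chest girth) from height and weight.
M = np.array([               # rows: subjects; cols: height_cm, weight_kg
    [160.0, 55.0],
    [170.0, 70.0],
    [180.0, 85.0],
    [175.0, 78.0],
])
chest = np.array([84.0, 95.0, 106.0, 101.0])   # synthetic targets

X = np.hstack([M, np.ones((len(M), 1))])        # add intercept column
coef, *_ = np.linalg.lstsq(X, chest, rcond=None)

def predict_chest(height_cm, weight_kg):
    """Online: predict the missing measurement from the ones supplied."""
    return float(np.array([height_cm, weight_kg, 1.0]) @ coef)

est = predict_chest(172.0, 72.0)
```

A real system would train on a large anthropometric database and likely use a richer multivariate model, but the offline-learn / online-predict split matches the claim's structure.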
- A device according to claim 13, wherein in order to deform or simply animate an avatar to a new avatar defined as Av1 of the body as represented in a new first representation, the reference or on-average avatar data (V, E, F, JK, W) and a known or an estimate of the user joint kinematics defined as JK1 of the new first representation are fed to a cost function that optimizes and deforms Av to Av1 subject to a number of physical constraints known or learned from natural human motion wherein, in use, the new animated avatar Av1 with the same body measurements as the on-average avatar Av is a function of the reference or on-average data, i.e. Av1 = f(Av, W, JK, JK1).
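The claim does not pin Av1 = f(Av, W, JK, JK1) to a particular formula. A common choice consistent with a bone-weight matrix W is linear blend skinning, sketched here with per-joint rotations and translations standing in for the change from the reference pose JK to the new pose JK1; treat this as an assumption, not the claimed method.

```python
import numpy as np

def deform_avatar(V, W, JK_rot, JK_trans):
    """Linear blend skinning sketch of Av1 = f(Av, W, JK, JK1).
    V: (n, 3) avatar vertices; W: (n, j) bone weights (rows sum to 1);
    JK_rot: (j, 3, 3) per-joint rotations; JK_trans: (j, 3) per-joint
    translations taking the reference pose to the new pose."""
    n, j = W.shape
    out = np.zeros_like(V)
    for b in range(j):
        # Each bone transforms every vertex; weights blend the results.
        out += W[:, b:b + 1] * (V @ JK_rot[b].T + JK_trans[b])
    return out

V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # two vertices
W = np.array([[1.0, 0.0], [0.0, 1.0]])            # each bound to one bone
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # bone 2 translates up
V1 = deform_avatar(V, W, R, t)                    # vertex 2 moves with bone 2
```

The cost-function optimization in the claim would refine such a skinned initialization subject to the stated physical constraints.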
- A device according to claim 14, wherein a function is derived that combines two weighted energy minimization functions: a surface smoothness function utilizing a Laplacian cotangent matrix which uses V, F and E; and, a bone attachment function which uses V, F, and W to ensure that the correspondence is constrained between the avatar vertices and its skeletal structure.
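Under one hedged notation, the combined objective in this claim can be written as a weighted sum of the two energies, where $L$ is the cotangent Laplacian built from $V$, $E$ and $F$, $\delta = LV$ are the reference differential coordinates, and $T_j$ is joint $j$'s transform. The weights $\lambda_s$, $\lambda_b$ and the exact residual forms are assumptions, since the claim fixes neither.

```latex
\min_{V_1}\;
\lambda_s \underbrace{\left\| L V_1 - \delta \right\|^2}_{\text{surface smoothness}}
\;+\;
\lambda_b \underbrace{\sum_{i} \Big\| v_{1,i} - \sum_{j} W_{ij}\, T_j(v_i) \Big\|^2}_{\text{bone attachment}}
```

The first term keeps the deformed surface locally similar to the reference avatar; the second pulls each vertex toward the position implied by its weighted bone transforms, constraining the correspondence between vertices and skeleton as the claim requires.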
- A device according to any one of the preceding claims, wherein the input comprises details of the body.
- A device according to claim 1, wherein the input comprises a classification of the body, and the controller is operable, under control of the electronic program instructions, to: on the basis of the classification of the body, obtain data corresponding to the body classification; process the first representation by comparing the first representation and the obtained data; and generate the second representation of the body on the basis of the comparison.
- A device according to claim 17, wherein the first representation of the body includes the classification of the body.
- A device according to claim 17 or 18, wherein the data may be obtained by one or more of retrieving, receiving, extracting, and identifying it, from one or more sources.
- A device according to any one of claims 17 to 19, wherein the obtained data comprises at least one of: a template; an earlier representation of the body; and an integration of, or of data of or associated with, one or more earlier representations of the body, and/or other bodies.
- A device according to any one of the preceding claims, wherein the body is at least one of: a human body, or one or more parts thereof; a living thing, or one or more parts thereof; a non-living thing, or one or more parts thereof.
- A device according to claim 21, when dependent on any one of claims to 8, wherein when the body is a human body, it is classified according to anthropometry.
- A device according to claim 20, further comprising a plurality of templates, each template having associated with it template data including a three dimensional model of a human body with standard mean anthropometry measurements.
- A device according to claim 23, wherein the measurements are for one or more measurements of: sex; size; weight; height; age; and ethnic groups' variations.
- A device according to any one of the preceding claims, wherein the input means comprises one or more sensors.
- A device according to claim 25, wherein the one or more sensors are part of a set of sensors, the set of sensors comprising one or more of: a motion sensor; an infra-red sensor; a depth sensor; a three dimensional imaging sensor; an inertial sensor; a Micro Electromechanical (MEMS) sensor; an imaging means; an acceleration sensor; an orientation sensor; a direction sensor; and a position sensor.
- A device according to any one of the preceding claims, wherein the first representation comprises one or more visual representations of the body.
- A device according to claim 27, when dependent on claim 25 or 26, wherein the one or more sensors comprises an imaging means operable to capture the one or more visual representations of the body.
- A device according to claim 28, wherein the one or more sensors comprises an orientation sensor operable to provide orientation data for use during capture of the one or more visual representations of the body to facilitate alignment thereof to a plane for increased accuracy.
- A device according to any one of claims 27 to 29, wherein the one or more visual representations of the body include at least one photograph of a front view of the body and at least one photograph of a side view of the body.
- A device according to claim 30, wherein the photographs comprise at least one of: standard two dimensional (2D) binary, gray or color images; depth images with or without colors and/or textures; a complete three dimensional (3D) point cloud or a number of incomplete point clouds of the body with or without colors and/or texture; and/or a three dimensional (3D) mesh of the body with or without colors and/or texture.
- A device according to any one of claims 27 to 31, when dependent on claim 5, wherein the controller is further operable, under control of the electronic program instructions, to: segment at least one foreground comprising the body of one or more visual representations of the body of the first representation; convert the one or more segmented foregrounds of the one or more visual representations of the first representation into respective silhouettes; use the one or more segmented foregrounds and their respective silhouettes to construct a hull of a shape of the body, and/or extract features and/or extract measurements of key points; and use one or more of the hull, and/or features, and/or key point measurements to one or more of modify, rig, and morph a 3D model of a body (an average body model) of the selected template to create a modified subject-specific 3D model image being the second representation.
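A toy version of the hull-construction step in this claim is silhouette intersection (a visual hull) under orthographic projection. Real systems would use calibrated perspective cameras and typically more than two views, so the two-view orthographic carve below is an assumption-laden sketch of the idea only.

```python
import numpy as np

def visual_hull(front_sil, side_sil):
    """Carve a coarse visual hull from a front and a side silhouette under
    orthographic projection. Silhouettes are binary (H, W) arrays sharing
    height H; the hull is a boolean (H, Wf, Ws) voxel grid (y, x, z)."""
    # A voxel survives only if it projects inside both silhouettes.
    return (front_sil[:, :, None].astype(bool)
            & side_sil[:, None, :].astype(bool))

front = np.zeros((4, 4), dtype=np.uint8)
front[1:3, 1:3] = 1                       # body seen from the front
side = np.zeros((4, 4), dtype=np.uint8)
side[1:3, 0:2] = 1                        # body seen from the side
hull = visual_hull(front, side)
```

Features or key-point measurements extracted from such a hull could then drive the morphing of the template 3D model described in the claim.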
- A device according to claim 32, wherein in the case of depth images, point clouds and meshes, any with or without colors and/or textures, the controller is operable, under control of the electronic program instructions, to reconstruct a three dimensional subject-specific shape of the body.
- A device according to claim 27, wherein the controller is further operable, under control of the electronic program instructions, to: delete the one or more visual representations of the first representation.
- A method for imaging a body, the method comprising: storing electronic program instructions for controlling a controller; and controlling the controller via the electronic program instructions, to: receive an input via an input means, the input comprising a first representation of the body; process the first representation; and generate a second representation of the body on the basis of the processing of the first representation.
- A method according to claim 35, further comprising communicating the generated second representation.
- A method according to claim 36, wherein the communicating comprises displaying the generated second representation via the display.
- A method according to any one of claims 35 to 37, further comprising controlling the controller via the electronic program instructions, to: process the first representation of the body by segmenting the first representation of the body to obtain a plurality of silhouettes which represent, in simple form, projected shadows of a substantially true three dimensional scan of the body; and generate the second representation of the body on the basis of the silhouettes.
- A method according to claim 38, further comprising controlling the controller via the electronic program instructions, to: generate a user-specific skeleton that will appear on the display once the input is received; and, during the process of segmenting the first representation, enable the user to align the body in the first representation with the user-specific skeleton.
- A method according to claim 39, wherein the step of enabling the user includes instructing the user via audible sounds/words/speech to align parts of the body to the displayed user-specific skeleton, wherein the electronic program instructions are operable to control the alignment process by errors calculated between characteristics including shape, pose, and spatiotemporal features that are extracted from the generated skeleton and the body's real time captured image(s).
- A method according to claim 40, further comprising controlling the controller via the electronic program instructions, to: calculate on the basis of user height information submitted, image size (image height and width in pixels), and using blob analysis of binary images, projection theories and camera models, the following: initial estimates of intrinsic and extrinsic parameters of the capturing camera, which include camera position and orientation in each image, defined as pose P; and, initial estimates of joint kinematics of a skeletal model representing a skeleton of the body, defined as JK, including 3D position and 3D orientation of each joint of the skeletal model.
- A method according to claim 41, further comprising controlling the controller via the electronic program instructions, to: predict on the basis of the user height and weight information submitted, or the user height information only, an initial on-average avatar, defined as Av, which varies with the user's entered height, weight or other body measurements if known; and, rig the on-average avatar Av to a reference skeleton of size N-joints with known skeletal model JK in a reference pose, and a bone weight/height matrix defined as W.
- A method according to claim 42, wherein the matrix W is calculated offline just once during a learning process of the prediction process, then saved together with the reference skeletal model JK to be used for prediction or generation of other avatars, the purpose of W being to constrain, control and model the relationship between joints, bones and the actual 3D avatar surface represented by its vertices V, edges E and faces F.
- A method according to claim 43, wherein the process of predicting the initial on-average avatar Av follows a sophisticated multivariate-based machine learning approach.
- A method according to claim 44, wherein the multivariate-based machine learning approach comprises offline machine intelligence learning human shapes using 3D features extracted from a plurality of rigged (rendered) three dimensional scans of real humans (males and females) of different ages and poses.
- A method according to claim 45, wherein the multivariate-based machine learning approach further comprises the machine intelligence learning various statistical relationships between different body measurements defined as vector M = (ml, m2, ..., mL) with L number of different measurements wherein, in use, one or more measurements can be predicted given one or more different measurements and an on-average avatar Av can be predicted given one, or more of these measurements.
- A method according to claim 46, wherein in order to deform or simply animate an avatar to a new avatar defined as Av1 of the body as represented in a new first representation, the reference or on-average avatar data (V, E, F, JK, W) and a known or an estimate of the user joint kinematics defined as JK1 of the new first representation are fed to a cost function that optimizes and deforms Av to Av1 subject to a number of physical constraints known or learned from natural human motion wherein, in use, the new animated avatar Av1 with the same body measurements as the on-average avatar Av is a function of the reference or on-average data, i.e. Av1 = f(Av, W, JK, JK1).
- A method according to claim 47, wherein a function is derived that combines two weighted energy minimization functions: a surface smoothness function utilizing a Laplacian cotangent matrix which uses V, F and E; and, a bone attachment function which uses V, F, and W to ensure that the correspondence is constrained between the avatar vertices and its skeletal structure.
- A method according to any one of claims 35 to 37, wherein the input comprises a classification of the body, further comprising controlling the controller via the electronic program instructions, to: on the basis of the classification of the body, obtain data corresponding to the body classification; process the first representation by comparing the first representation and the obtained data; and generate the second representation of the body on the basis of the comparison.
- A computer-readable storage medium on which is stored instructions that, when executed by a computing means, causes the computing means to perform the method in accordance with any one of claims to 49.
- A computing means programmed to carry out the method in accordance with any one of claims 35 to 49.
- A data signal including at least one instruction being capable of being received and interpreted by a computing system, wherein the instruction implements the method in accordance with any one of claims 35 to 49.
- A system for imaging a body comprising a device according to any one of claims 1 to 34.
- A method for achieving an objective, the method comprising using a device according to any one of claims 1 to 34 to generate and display one or more second representations of a body via the display to provide motivation for achieving the objective.
- A method according to claim 54, wherein the body is a human body, and the objective comprises a personal fitness goal for the human body.
Applicants
- Myfiziq Ltd

Inventors
- Iscoe Katherine
- Bosanac Vlado
- El-sallam Amar
Document History
- Publication: Jun 1, 2017
- Application: Dec 4, 2015 (AU 2015/358289 A)
- Priority: Dec 4, 2015 (AU 2015/000736 W)
- Priority: Dec 5, 2014 (AU 2014/904940 A)