Abstract
The present disclosure is directed to methods, computer program products, and computer systems for instructing a robot to prepare a food dish by replicating the human chef's movements and actions. Monitoring of the human chef is carried out in an instrumented, application-specific setting, a standardized robotic kitchen in this instance, using sensors and computers to watch, monitor, record, and interpret the motions and actions of the human chef. From these observations, a robot-executable set of commands is developed that is robust to variations and changes in the environment, allowing a robotic or automated system in a robotic kitchen to prepare the same dish to the standards and quality of the dish prepared by the human chef.
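To make the disclosed pipeline concrete, the following minimal sketch shows one plausible shape for the electronic record the abstract describes: a dish stored as stages of mini-manipulations, each carrying robot-executable instructions derived from the chef's observed motions. All class and field names here are illustrative assumptions, not structures defined in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RoboticInstruction:
    """One robot-executable command: a sensing, manipulation, or adjustment action."""
    kind: str                     # hypothetical: "sense", "manipulate", "adjust"
    parameters: Dict[str, float]  # numeric values (or range endpoints) per the claims

@dataclass
class MiniManipulation:
    """A detected unit of chef motion mapped to pre-programmed robot actions."""
    name: str
    instructions: List[RoboticInstruction] = field(default_factory=list)

@dataclass
class RecipeRecord:
    """The electronic record transmitted to the robotic kitchen for replication."""
    dish_name: str
    ingredients: List[str]
    stages: List[List[MiniManipulation]]  # one mini-manipulation sequence per stage
```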
Claims
- A computer-implemented method operating on a robotic apparatus, comprising:
providing an electronic description of one or more food dishes, including the recipes for making each food dish from ingredients by a chef;
for each food dish, sensing a sequence of observations of a chef's movements by a plurality of robotic sensors as the chef prepares the food dish using ingredients and kitchen equipment;
detecting in the sequence of observations mini-manipulations corresponding to a sequence of movements carried out in each stage of preparing a particular food dish;
transforming the sensed sequence of observations into computer readable instructions for controlling a robotic apparatus capable of performing the sequences of mini-manipulations;
storing at least the sequence of instructions for mini-manipulations to electronic media for each food dish, wherein the sequence of mini-manipulations for each food dish is stored as a respective electronic record;
transmitting the respective electronic record for a food dish to a robotic apparatus capable of replicating the sequence of stored mini-manipulations, corresponding to the original actions of the chef; and
executing the sequence of instructions for mini-manipulations for a particular food dish by the robotic apparatus, thereby obtaining substantially the same result as the original food dish prepared by the chef, wherein executing the instructions includes sensing properties of the ingredients used in preparing the food dish.
- The method of claim 1, further comprising the step of pre-programming each mini-manipulation step into a sequence of robotic sensor actions, robotic manipulation actions, and robotic adjustment actions, wherein the pre-programmed sequence of robotic actions is retrieved from electronic storage and inserted into the sequence of robotic instructions.
- The method of claim 1, between the sensing and storing steps, further comprising transforming the sensed data from a plurality of sensors corresponding to the food preparation stages into a set of computer-readable instructions for controlling the robotic apparatus, wherein the instructions determine the execution of a series of movements by the robotic apparatus in preparing the food dish.
- The method of claim 3, wherein the set of computer-readable instructions includes adjusting one or more parameters depending on the properties of ingredients associated with the food preparation for the particular food dish.
- The method of claim 4, wherein the one or more parameters comprise variables that can take numerical values or ranges of numerical values.
- The method of claim 5, wherein the one or more parameters comprise one or more variables for adjusting instructions for a particular ingredient.
- The method of claim 5, wherein the one or more parameters comprise one or more variables for adjusting instructions for a particular mini-manipulation.
- The method of claim 5, wherein the one or more parameters comprise one or more variables for adjusting instructions for a particular cooking stage.
- The method of claim 5, wherein the one or more parameters comprise one or more variables for adjusting instructions for an action primitive.
- The method of claim 5, wherein the one or more parameters comprise instruction parameters for instructing the robotic apparatus including instructions to a robotic device.
- The method of claim 5, wherein the one or more parameters comprise a plurality of chef-defined parameters.
- The method of claim 5, wherein the one or more parameters comprise a plurality of user settable parameters.
- The method of claim 4, wherein the parameters of the robotic actions are adjusted dynamically based on the sensor values of the ingredients determined by the robotic apparatus in the process of preparing the food dish.
- The method of claim 4, wherein the parameter adjustments on the equipment are made by the robotic apparatus based on the parameter values for the equipment set by the chef in the original preparation of the food dish.
- The method of claim 1, wherein the executing step comprises the robotic apparatus preparing the same food dish in substantially the same amount of time as the chef prepares the original food dish.
- The method of claim 1, wherein, in the executing step, each food preparation stage executed by the robotic apparatus has substantially the same time duration as the corresponding stage in the original food preparation by the chef.
- The method of claim 3, wherein each stage comprises one or more specific mini-manipulations in the food preparation and wherein the mini-manipulations are recognized as a sequence of standardized pre-programmed robotic apparatus action primitives, wherein the action primitive sequences are inserted into the overall sequence of computer-readable instructions for controlling the robotic apparatus.
- The method of claim 3, wherein all of the specific mini-manipulations in the food preparation process are recognized as a sequence of standardized pre-programmed robotic apparatus action primitives, wherein the action primitive sequences are inserted into the overall sequence of computer-readable instructions for controlling the robotic apparatus.
- The method of claim 1, wherein the robotic apparatus comprises:
first and second robotic arms, each arm having predetermined degrees of freedom;
first and second robotic hands, each hand having a wrist coupled to the respective robotic arm, each wrist having predetermined degrees of freedom; and
the first and second robotic hands each including a plurality of fingers, each finger having predetermined degrees of freedom and at least one haptic sensor.
- The method of claim 19, wherein the predetermined degrees of freedom in each arm consist of at least six degrees of freedom.
- The method of claim 19, wherein the predetermined degrees of freedom in each wrist consist of at least two degrees of freedom.
- The method of claim 19, wherein the predetermined degrees of freedom for each finger comprise at least four degrees of freedom for each finger, each finger having up to three joints.
- The method of claim 1, wherein the robotic sensors comprise at least one video camera.
- The method of claim 1, wherein the robotic sensors comprise at least one range sensor.
- The method of claim 1, wherein the robotic sensors comprise at least one haptic sensor.
- The method of claim 25, wherein the one or more haptic sensors are embedded into gloves worn by the chef while preparing each food dish.
- The method of claim 1, wherein gloves worn by the chef have embedded surface markings, the surface markings indicating the deformation of a palm surface on each glove during the chef's food preparation process.
- The method of claim 1, wherein the gloves worn by the chef have embedded magnetic sensors, the magnetic sensors measuring the deformation of a palm surface on each glove during the chef's food preparation process.
- The method of claim 1, wherein the robotic sensors comprise laser-based sensors to detect the orientation, distance, shape, and size of objects used in at least one stage of preparing a food dish.
- The method of claim 25, wherein the haptic sensors in the gloves are located at the fingertips and on the palm of the robotic hand.
- A computer-implemented method operating on a robotic apparatus, comprising:
providing a library of electronic descriptions of one or more food dishes, including the name of the food dish, the ingredients of the food dish and the recipes for making the food dishes from ingredients;
providing sequences of pre-programmed instructions for standardized mini-manipulations, wherein each mini-manipulation produces at least one identifiable result in a stage of preparing at least one food dish;
sensing a sequence of observations corresponding to a chef's movements by a plurality of robotic sensors as the chef prepares the food dish using ingredients and kitchen equipment;
detecting standardized mini-manipulations in the sequence of observations, wherein a mini-manipulation corresponds to one or more observations, and the sequence of mini-manipulations corresponds to the preparation of a food dish;
transforming the sequence of observations into robotic instructions based on software-implemented methods for recognizing sequences of pre-programmed standardized mini-manipulations from the sensed sequence of chef motions, the mini-manipulations each comprising a sequence of robotic instructions, and the robotic instructions including dynamic sensing operations and robotic action operations;
storing the sequence of mini-manipulations and their corresponding robotic instructions in electronic media, the sequence of instructions and corresponding mini-manipulations for each food dish being stored as a respective electronic record for preparing each food dish;
transmitting the respective electronic record for a food dish to a robotic apparatus capable of replicating and executing the sequence of robotic instructions; and
executing the robotic instructions for each particular food dish by the robotic apparatus, thereby obtaining substantially the same result as the original food dish prepared by the chef.
- The method of claim 31, further comprising setting parameters for the robotic instructions based on default values provided by the chef.
- The method of claim 31, further comprising setting parameters for the robotic instructions based on preferences provided by the user.
- The method of claim 31, further comprising adjusting parameters for the robotic instructions based on dynamic sensor observations recorded and transformed into adjusted parameters in the robotic instructions as the robotic apparatus executes its sequence of instructions.
- The method of claim 31, wherein the adjustment of parameters comprises compensating for variability in the ingredients in order to obtain essentially the same result in preparing the food dish.
- The method of claim 35, wherein the adjustment of parameters minimizes expected errors in the outcomes to maximize expected accuracy.
- The method of claim 36, wherein the cumulative error to be minimized is estimated by $\mathrm{Error}(C,R)=\sum_{i=1,\dots,n}\frac{c_i-p_i}{\max_t\left(c_{i,t}-p_{i,t}\right)}$ and the expected resulting accuracy is estimated by $A(C,R)=1-\frac{1}{n}\sum_{i=1,\dots,n}\frac{c_i-p_i}{\max_t\left(c_{i,t}-p_{i,t}\right)}$.
- The method of claim 35, wherein the cumulative error to be minimized is estimated by $\mathrm{Error}(C,R)=\sum_{i=1,\dots,n}\alpha_i\frac{c_i-p_i}{\max_t\left(c_{i,t}-p_{i,t}\right)}$ and the resulting accuracy is estimated by $A(C,R)=1-\left(\sum_{i=1,\dots,n}\alpha_i\frac{c_i-p_i}{\max_t\left(c_{i,t}-p_{i,t}\right)}\right)\Big/\sum_{i=1,\dots,n}\alpha_i$. (A numerical sketch of these estimates follows this claim family below.)
- The method of claim 31, further comprising one or more machine learning mechanisms implemented in software, wherein the learning mechanisms generalize the sequence of robotic instructions for preparing the particular food dish.
- The method of claim 31, wherein the generalization of robotic instructions comprises replacing a parameter value with a range of parameter values.
- The method of claim 31, wherein the generalization of robotic instructions further provides a plurality of alternative instructions for at least one of the food preparation mini-manipulations.
- The method of claim 41, further comprising the step of evaluating the alternative robotic instructions for at least one mini-manipulation to determine a preferred mini-manipulation.
- The method of claim 31, wherein the machine learning mechanism comprises robotic case-based learning.
- The method of claim 31, wherein the machine learning mechanism comprises robotic reinforcement learning.
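The error and accuracy estimates in the two equation claims above can be exercised numerically. The sketch below assumes the reconstruction used there: each parameter's final chef-robot deviation $c_i-p_i$ is normalized by that parameter's maximum absolute deviation over time, with optional weights $\alpha_i$; uniform weights reproduce the unweighted form with its $1/n$ factor. Function and variable names are illustrative.

```python
from typing import Optional, Sequence

def accuracy(c_final: Sequence[float], p_final: Sequence[float],
             c_traj: Sequence[Sequence[float]], p_traj: Sequence[Sequence[float]],
             alpha: Optional[Sequence[float]] = None) -> float:
    """Estimate A(C, R) = 1 - Error(C, R) / sum(alpha) per the claims above.

    c_final, p_final: final chef and robot values for each of the n parameters.
    c_traj, p_traj:   per-parameter time series, used for the max-over-t
                      normalization (assumed nonzero, i.e. each parameter
                      deviates at some point during execution).
    alpha:            optional importance weights; None means uniform weights,
                      which recovers the unweighted 1/n form.
    """
    n = len(c_final)
    if alpha is None:
        alpha = [1.0] * n
    error = sum(
        a * abs(c - p) / max(abs(ct - pt) for ct, pt in zip(cs, ps))
        for a, c, p, cs, ps in zip(alpha, c_final, p_final, c_traj, p_traj)
    )
    return 1.0 - error / sum(alpha)
```

For example, a robot whose final parameter values exactly match the chef's yields an error of 0 and an accuracy of 1, regardless of the weights.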
- A robotics system, comprising:
a multimodal sensing system capable of observing human motions and generating human motions data in a first instrumented environment;
a computer, communicatively coupled to the multimodal sensing system, for recording the human motions data received from the multimodal sensing system and processing the human motions data to extract motion primitives; and
a robotics apparatus, communicatively coupled to the multimodal sensing system, capable of using the human motions data to replicate the observed human motions in a second instrumented environment.
- The system of claim 45, wherein the replicating comprises playing back the extracted motion primitives.
- The system of claim 45, wherein the extracted motion primitives comprise a plurality of motion primitives ranging from high-level motion primitives to low-level motion primitives.
- The system of claim 45, wherein the extracted motion primitives comprise one or more high-level motion primitives and one or more low-level motion primitives.
- The system of claim 45, wherein the robotics system comprises:
first and second robotic arms;
first and second robotic hands, each hand having a wrist coupled to a respective arm, each hand having a plurality of articulated fingers and a palm; and
a plurality of actuators, each actuator controlling a single degree of freedom or a combination of degrees of freedom, wherein each degree of freedom comprises a relative motion of rotary elements, linear elements, or any combination thereof.
- The system of claim 45, wherein the robotics system comprises:
first and second robotic arms;
first and second robotic hands, each hand having a wrist coupled to a respective arm, each hand having a plurality of articulated fingers and a palm; and
a plurality of actuators, each actuator controlling an individual or a combination of movable joints within the robotic arms, robotic hands, fingers, or wrists.
- The system of claim 45, wherein the first instrumented environment is the same as the second instrumented environment.
- The system of claim 45, wherein the first instrumented environment is different from the second instrumented environment.
- The system of claim 45, wherein the second instrumented environment comprises multiple pairs of robotic arms and multiple pairs of robotic hands, the combination of multiple pairs of robotic arms and hands performing food preparation equivalent to that performed by multiple humans.
- The system of claim 45, wherein the human motions data comprises data from the observation of a human chef preparing a food dish, the extracted motion primitives including a sequence of food preparation steps.
- The system of claim 45, wherein the human motions data comprises data from the observation of an artist painting an artwork on canvas, the extracted motion primitives including a sequence of painting steps.
- The system of claim 45, wherein the human motions data comprises data from the observation of a musician playing a musical instrument, the extracted motion primitives including a sequence of musical instrument playing steps.
- The system of claim 45, wherein the computer creates a recipe script containing a sequence of high-level motion commands to implement a cooking recipe.
- A robotic kitchen system, comprising:
first and second robotic arms;
first and second robotic hands, each hand having a wrist coupled to a respective arm, each hand having a palm and multiple articulated fingers, each articulated finger on the respective hand having at least one sensor; and
first and second gloves, each glove covering the respective hand and having a plurality of embedded sensors.
- The system of claim 58, further comprising a standardized robotic kitchen having a plurality of standardized kitchen equipment, standardized kitchen tools and standardized containers.
- The system of claim 58, further comprising a standardized robotic kitchen having a plurality of standardized ingredients, each ingredient having one or more properties that indicate possible variations among instances of the same ingredient.
- The system of claim 58, wherein the one or more properties of a particular ingredient comprise size, dimension, and weight.
- The system of claim 58, wherein the same standardized robotic kitchen is used by a chef to prepare a food dish and by the robotic kitchen system to replicate the same food dish.
- The system of claim 58, wherein the plurality of sensors on the hands are capable of measuring distance, pressure, temperature, location, distribution and amount of force, and capturing images.
- The system of claim 58, wherein the at least one sensor on each finger of the respective hand is capable of measuring distance, pressure, temperature, location, distribution and amount of force, and capturing images.
- The system of claim 58, wherein the plurality of sensors comprise haptic sensors, pressure sensors, camera sensors, depth sensors, tactile sensors and strain sensors.
- The system of claim 58, wherein the plurality of sensors are located on the surfaces of each hand and within each hand.
- The system of claim 58, wherein each robotic arm comprises a plurality of joint encoders and resolvers for measuring the position and velocity of each joint on the robotic arms, and a plurality of joint torque sensors for measuring the torque at each joint on the robotic arms.
- The system of claim 58, wherein each wrist has a six-axis force and torque sensor for measuring the forces and torques at the wrist.
- The system of claim 58, wherein the first and second arms and the first and second hands are capable of any combination of synchronized motions between the first and second arms and the first and second hands.
- The system of claim 58, wherein the first arm performs a first food preparation function corresponding to a chef's movement, where the chef's movement requires a greater force to perform the food preparation function.
- The system of claim 58, wherein the first hand attaches to a first kitchen tool substantially simultaneously with the first arm attaching to a second kitchen tool.
- The system of claim 58, wherein the first hand performs a first food preparation function simultaneously with the first arm performing a second food preparation function, whereby the timing of the stage is adjusted to a point in the replication process that matches the subsequent one-to-one correspondence between the robotic replication and the chef's movements.
- The system of claim 58, wherein the first hand performs a food preparation function simultaneously with the first arm performing the same food preparation function.
- The system of claim 58, wherein the first hand attaches to a first kitchen tool and the first arm attaches to a second kitchen tool.
- The system of claim 58, wherein the first and second robotic arms, the first and second robotic hands, and the plurality of sensors are made of a material that is waterproof, tolerant of a wide temperature range, chemically inert, and food safe.
- A system, comprising:
a standardized kitchen module;
a plurality of multimodal sensors having a first type of sensors physically coupled to a human and a second type of sensors spaced away from the human;
the first type of sensors for measuring the posture of human appendages and sensing motion data of the human appendages; and
the second type of sensors for determining a spatial registration of the three-dimensional configurations of the environment, objects, movements, and locations of human appendages, the second type of sensors being configured to sense activity data, the standardized kitchen module having connectors to interface with the second type of sensors,
wherein the first type of sensors and the second type of sensors measure the motion data and the activity data, and send both the motion data and the activity data to a computer for storage and processing for food preparation.
- The system of claim 76, wherein the first type of sensors comprises position and velocity sensors for measuring the speed of articulated joints on human appendages.
- The system of claim 76, wherein the first type of sensors comprises distance, touch, and contact location, distribution and force sensors.
- The system of claim 76, wherein the second type of sensors comprises camera sensors, laser sensors, ultrasonic sensors, capacitive sensors, and infra-red sensors.
- The system of claim 76, wherein the second type of sensors comprises Hall-effect sensors.
- The system of claim 76, wherein the human appendages comprise a head, arms, hands and fingers.
- The system of claim 76, wherein the second type of sensors continuously determines the spatial registration of the three-dimensional configurations of the environment, objects, movements, and locations of human appendages.
- The system of claim 76, wherein the motion data is pre-processed before sending the motion data to a computer for storage and processing.
- The system of claim 76, wherein the activity data is pre-processed before sending the activity data to a computer for storage and processing.
- A method for food preparation by a robotic apparatus, comprising replicating a recipe by preparing a food dish via the robotic apparatus, the recipe broken down into one or more food preparation stages, each food preparation stage broken down into a sequence of mini-manipulations and action primitives, each mini-manipulation broken down into a sequence of action primitives, wherein each mini-manipulation has been successfully tested to produce an optimal result for that mini-manipulation in view of variations in the positions, orientations, and shapes of an applicable object and one or more applicable ingredients.
- The method of claim 85, prior to the replicating step, further comprising capturing a chef's motions, positions and orientations of one or more objects in the standardized kitchen module, and any interaction forces between the chef's motions and a directed object.
- The method of claim 86, between the capturing step and the replicating step, further comprising converting the captured data into robotic instructions.
- The method of claim 86, wherein each mini-manipulation comprises a sequence of action primitives that accomplish a basic functional unit in obtaining a specific result in food preparation.
- The method of claim 86, wherein each action primitive comprises an indivisible building block for food preparation without functional result.
- The method of claim 86, wherein each food preparation stage comprises a sequence of mini-manipulations and action primitives, where the action primitives are for food preparation without a functional result.
- The method of claim 86, wherein the robotic apparatus comprises one or more robotic arms and one or more robotic hands.
- The method of claim 86, wherein the robotic apparatus comprises an independent robotic platform.
- A robotic hand coated with a sensing glove, comprising:
five fingers; and
a palm connected to the five fingers, the palm having internal joints and a deformable surface material in three regions:
a first deformable region disposed on a radial side of the palm and near the base of the thumb;
a second deformable region disposed on an ulnar side of the palm, and spaced apart from the radial side; and
a third deformable region disposed on the palm and extending across the base of the fingers,
wherein the combination of the first deformable region, the second deformable region, the third deformable region, and the internal joints collectively operate to perform a mini-manipulation for food preparation.
- The robotic hand of claim 93, wherein the function of collectively operating comprises forming a shape of the palm and exerting forces via the palm to match the shape and forces of a chef's hand motions in preparing a food dish.
- The robotic hand of claim 93, wherein the function of collectively operating comprises forming an oblique palmar gutter in the palm for grasping a kitchen tool.
- The robotic hand of claim 93, wherein the function of collectively operating comprises forming a cupped palm shape for food preparation manipulation.
- The robotic hand of claim 93, wherein the first deformable region comprises a thenar eminence.
- The robotic hand of claim 93, wherein the second deformable region comprises a hypothenar eminence.
- The robotic hand of claim 93, wherein the first deformable region, the second deformable region, and the third deformable region comprise a deformable material that can be molded when force is applied in changing the shape of the material.
- The robotic hand of claim 93, wherein the first, second, and third deformable regions comprise a soft human-skin-like material, including a silicone material.
- The robotic hand of claim 93, wherein the first, second and third deformable regions comprise a material that deforms upon application of pressure.
- The robotic hand of claim 93, wherein the body of the palm comprises a plurality of internal joints driven by one or more actuators for configuring the palm surface to accomplish a mini-manipulation.
- The robotic hand of claim 93, wherein the body of the palm comprises a plurality of internal joints driven by one or more actuators for configuring the palm surface according to a predefined palm structure to execute a mini-manipulation.
- The robotic hand of claim 93, wherein the body of the palm comprises a plurality of internal joints driven by one or more actuators for configuring the palm surface to conform to a measured object geometry.
- The robotic hand of claim 93, wherein the configuration of the palm surface is determined by pressure sensor signals detected by one or more pressure sensors.
- The robotic hand of claim 93, wherein the configuration of the palm surface is determined by a plurality of shape feature points that match the measured object geometry.
- The robotic hand of claim 106, further comprising a sensor glove having surface markings for sensing the shape feature points.
- The robotic hand of claim 107, further comprising a sensor glove having a plurality of surface markings distributed to a plurality of regions on the palm surface.
- The apparatus of claim 107, further comprising a sensor glove having a plurality of surface markings coupled to a plurality of regions on the palm surface, the plurality of surface markings composing three groups of surface markings, the first group of surface markings disposed in the thenar eminence of the palm surface, the second group of surface markings disposed in the hypothenar eminence of the palm surface, and the third group of surface markings disposed across the base of the fingers on the palm surface.
- The apparatus of claim 107, wherein each of the surface markings has convex or concave corners to identify the location of the respective feature point.
- The apparatus of claim 110, further comprising one or more camera sensors for detecting the markings and computing a three-dimensional positioning of the shape feature points.
- The apparatus of claim 93, further comprising one or more magnetic sensors for measuring the shape feature points relative to the palm body.
- The apparatus of claim 93, further comprising a sensor glove embedded with a plurality of magnetic sensors coupled to a plurality of regions on the palm surface, the plurality of magnetic sensors composing three groups of magnetic sensors, the first group of magnetic sensors disposed in the thenar eminence of the palm surface, the second group of magnetic sensors disposed in the hypothenar eminence of the palm surface, and the third group of magnetic sensors disposed across the base of the fingers on the palm surface.
- The apparatus of claim 93, further comprising one or more magnets coupled to the palm surface and serving as a reference frame to the xyz coordinate positions of the plurality of magnetic sensors.
- The apparatus of claim 93, wherein the shape feature points on the palm surface are computed from a database library containing a predefined deformable model.
- The apparatus of claim 93, wherein the palm body has a plurality of identified marks to create a reference frame, the reference frame providing a structure for which the shape feature points are identified relative to fixed points on the palm body.
- The apparatus of claim 93, wherein each shape feature point is defined as a vector of xyz coordinate positions relative to the reference frame.
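The shape-feature-point claims above amount to a coordinate transform: each marker or magnetic-sensor reading, measured in the camera's or magnetic tracker's world frame, is re-expressed relative to the reference frame fixed to the palm body. A minimal sketch, assuming the palm frame is represented as an origin point plus an orthonormal rotation matrix (a hypothetical representation, not one specified by the patent):

```python
import numpy as np

def to_palm_frame(world_points: np.ndarray,
                  frame_origin: np.ndarray,
                  frame_rotation: np.ndarray) -> np.ndarray:
    """Express sensed feature points as xyz vectors in the palm reference frame.

    world_points:   (N, 3) marker/sensor positions in the world frame.
    frame_origin:   (3,) world position of the palm-frame origin, e.g. a fixed
                    reference mark or magnet on the palm body.
    frame_rotation: (3, 3) orthonormal matrix whose columns are the palm-frame
                    axes expressed in the world frame.
    Returns an (N, 3) array of shape feature points relative to the palm frame,
    matching the vector-of-xyz-coordinates definition in the claim above.
    """
    # p_palm = R^T (p_world - origin); for row vectors this is (p - o) @ R.
    return (world_points - frame_origin) @ frame_rotation
```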
- A computer-implemented method on a robotic apparatus, comprising:
executing a robotic cooking script for replicating a food recipe having a plurality of food preparation movements;
a computing system having a processor and memory, the system determining if each food preparation movement is identified as a standardized grabbing action of a standardized kitchen tool or a standardized object, a standardized hand-manipulation action or object, or a non-standardized object; and
for each food preparation movement, the computer system instructing the robotic cooking device to access a first database library if the food preparation movement involves a standardized grabbing action or a standardized object, the computer system instructing the robotic cooking device to access a second database library if the food preparation movement involves a standardized hand-manipulation action or object, and the computer system instructing the robotic cooking device to create a three-dimensional model of the non-standardized object if the food preparation movement involves a non-standardized object.
- The method of claim 118, wherein the first database library comprises a plurality of predefined kitchen tools, each kitchen tool being associated with a software file that includes a code for the kitchen tool, a previously stored three-dimensional model of the kitchen tool, and properties of the kitchen tool.
- The method of claim 118, wherein the second database library comprises a plurality of predefined mini hand manipulations associated with a specific food preparation task.
- The method of claim 118, wherein, for the non-standardized object, the computer system activates a plurality of sensors on a kitchen module to build a three-dimensional model of the non-standardized object to determine an optimal method for the robotic device to grab the non-standardized object.
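The claim family above reduces to a three-way dispatch per food preparation movement: consult the kitchen-tool library, consult the hand-manipulation library, or fall back to sensor-driven three-dimensional modeling for non-standardized objects. A hypothetical sketch of that branch (library layouts and names are illustrative assumptions):

```python
from enum import Enum, auto

class MovementKind(Enum):
    STANDARDIZED_GRAB = auto()   # standardized grabbing action or standardized object
    STANDARDIZED_HAND = auto()   # standardized hand-manipulation action or object
    NON_STANDARDIZED = auto()    # unrecognized object requiring a 3-D model

def build_3d_model(object_id: str) -> dict:
    """Placeholder for activating kitchen-module sensors to model an unknown
    object and choose an optimal grasp, as the last claim describes."""
    raise NotImplementedError("sensor fusion is outside this sketch")

def dispatch(kind: MovementKind, movement_id: str,
             tool_library: dict, manipulation_library: dict):
    """Route one food preparation movement per the claims above."""
    if kind is MovementKind.STANDARDIZED_GRAB:
        return tool_library[movement_id]          # first database library
    if kind is MovementKind.STANDARDIZED_HAND:
        return manipulation_library[movement_id]  # second database library
    return build_3d_model(movement_id)            # non-standardized object
```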
- A method for recipe script generation, comprising:
receiving filtered raw data from sensors in the surroundings of the standardized kitchen module;
generating a sequence of script data from the filtered raw data; and
transforming the sequence of script data into machine-readable and machine-executable commands for preparing a food dish, the machine-readable and machine-executable commands including commands for controlling a pair of robotic arms and hands to perform a function from the group consisting of one or more food preparation stages, one or more mini-manipulations, and one or more action primitives.
- The method of claim 122, after the transforming step, further comprising executing the sequence of script data for instructing the robotic arms and hands in performing mini-manipulations to prepare one or more stages of the food dish.
- The method of claim 122, wherein the executing step comprises monitoring the robotic arms, the robotic hands, and the food dish for real-time mini-manipulation adaptations to optimize obtaining substantially the same food dish.
- The method of claim 122, wherein the receiving step comprises collecting the raw data, which is organized into a plurality of groups.
- The method of claim 125, wherein a first group in the plurality of groups comprises two-dimensional raw data and three-dimensional raw data sensed from one or more sensors.
- The method of claim 125, wherein a second group in the plurality of groups comprises robotic apparatus raw data sensed from one or more sensors.
- The method of claim 127, wherein the robotic apparatus raw data comprises data relating to the internal position, velocity, joints and torque of the robotic apparatus.
- The method of claim 127, wherein a third group in the plurality of groups comprises kitchen status raw data sensed from one or more sensors, the kitchen status raw data including analog data and binary data.
- The method of claim 122, wherein the generating step comprises melding processed data from a data process mapping module, a data extraction module, and a data reduction and abstraction module.
- The method of claim 130, wherein the melding comprises arranging processed data according to a time stamp and a process step with corresponding ingredients, equipment used, key cooking method, and key variables to be monitored and tracked.
- The method of claim 130, wherein the data process mapping module is configured to assess the operating environment of the standardized robotic kitchen to identify the selected food preparation stages, the selected equipment, the selected kitchen tools, the selected ingredients, and the operating locations as part of preparing a particular food dish.
- The method of claim 130, wherein the data extraction and mapping module is configured to process two-dimensional raw data to extract two-dimensional image data, extract edges of objects in the image, extract the color and texture of the objects in the surrounding area in the images, identify object types and locations in the image, identify ingredients and equipment visible in the image, and associate them with a particular food preparation stage.
- The method of claim 133, wherein the data extraction and mapping module receives processed data from the data reduction and abstraction module, the processed data comprising object property information, object dimensions, and the relative three-dimensional location and orientation of the object in the standardized robotic kitchen.
- The method of claim 132, wherein the data reduction and abstraction module is configured to receive three-dimensional raw data and extract a portion of the three-dimensional data that is relevant to a specific food preparation step.
- The method of claim 135, wherein the data reduction and abstraction module is configured to process the extracted three-dimensional data, and perform computational steps including the extraction of geometric information to allow identification and matching of a particular object within the raw three-dimensional data set.
- The method of claim 131, wherein the data reduction and abstraction module identifies the size, the type, the location, and the orientation of the object in the three-dimensional standardized robotic kitchen.
- The method of claim 125, wherein the melding comprises receiving information from smart appliances.
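Read end to end, the recipe-script claims describe a pipeline: ingest filtered sensor data, meld it into time-stamped script steps tagged with ingredients and equipment, and emit machine-executable commands. A compact, hypothetical sketch of the melding stage (the record layout and field names are assumptions, not defined in the patent):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ScriptStep:
    timestamp: float
    ingredients: List[str]
    equipment: List[str]
    command: str  # e.g. a mini-manipulation or action-primitive name

def meld_script(filtered_raw: List[Dict]) -> List[ScriptStep]:
    """Arrange processed sensor records into an ordered recipe script,
    sorted by time stamp as the melding claim describes."""
    steps = [
        ScriptStep(
            timestamp=rec["t"],
            ingredients=rec.get("ingredients", []),
            equipment=rec.get("equipment", []),
            command=rec["step"],
        )
        for rec in filtered_raw
    ]
    return sorted(steps, key=lambda s: s.timestamp)
```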
Owners (US)
- Mbl Limited (May 12, 2016)
Applicants
- Oleynik Mark
Inventors
- Oleynik Mark
IPC Classifications
- B25J9/00
Document Preview
- Publication: Oct 15, 2015
- Application: Feb 20, 2015, US 201514627900 A
- Priority: Feb 20, 2015, US 201514627900 A
- Priority: Feb 16, 2015, US 201562116563 P
- Priority: Feb 8, 2015, US 201562113516 P
- Priority: Jan 28, 2015, US 201562109051 P
- Priority: Jan 16, 2015, US 201562104680 P
- Priority: Dec 10, 2014, US 201462090310 P
- Priority: Nov 22, 2014, US 201462083195 P
- Priority: Oct 31, 2014, US 201462073846 P
- Priority: Sep 26, 2014, US 201462055799 P
- Priority: Sep 2, 2014, US 201462044677 P
- Priority: Jul 15, 2014, US 201462024948 P
- Priority: Jun 18, 2014, US 201462013691 P
- Priority: Jun 17, 2014, US 201462013502 P
- Priority: Jun 17, 2014, US 201462013190 P
- Priority: May 8, 2014, US 201461990431 P
- Priority: May 1, 2014, US 201461987406 P
- Priority: Mar 16, 2014, US 201461953930 P
- Priority: Feb 20, 2014, US 201461942559 P