U.S. Patent Attorneys in New Jersey & New York

Methods and devices for smart shopping (Tech Patents and Software Patents)

Patent no: 10,026,116
Issued: July 17, 2018
Inventor: Zohar, et al.
Attorney: Michael Feigin

Abstract

There are provided methods and devices for improving a shopping experience of a user, including methods and devices for creating, updating, and maintaining a list such as a shopping list and methods and devices for automatically identifying a suitable substitute to a user selected product.

Claims

The invention claimed is:

1. A method for creating and updating at least one of a list and a database with respect to at least one stock item, the method comprising: using an image capturing element, capturing at least one image of a stock item in a vicinity of said image capturing element; analyzing said at least one image to identify features of said stock item; uniquely identifying said stock item based at least on said identified features; tracking motion of at least one of said stock item, another object, and a hand, to detect at least one user gesture; interpreting said at least one detected user gesture to identify an action associated with said gesture, said action relating to at least one of an update to a list of objects and a change in a display associated with said list of objects; and based on said interpreting, carrying out said action, wherein at least one of said identifying said stock item and said interpreting said at least one detected user gesture is based on a combination of at least two of the following: specific gesture information identifying gestures associated with specific user actions; user specific information relating to gesture nuances of a specific user and to preferences of said specific user; segment specific information associated with a segment of users including said specific user, said segment specific information relating to at least one of gestures and preferences of users in said user-segment; and object specific information relating to physical characteristics of said stock item, and wherein at least one of said user specific information, said segment specific information, and said object specific information, is obtained using machine learning techniques.

2. The method of claim 1, further comprising automatically learning said user-specific information over time, and wherein said user-specific information comprises at least one of: information regarding a purchase history of said user, information regarding a list history of said user, information regarding speech of said user, information regarding one or more segments of users with which said user is associated, and information relating to user-specific aspects when triggering said image capturing element to capture said at least one image, said user-specific triggering aspects including at least one of: a distance of said user from said image capturing element at a time of said triggering; a triggering gesture used by said user at the time of said triggering; a speed of said triggering gesture; timing of said triggering gesture; a duration for which said user is in said vicinity of said image capturing element for the purpose of said triggering; characteristics of a holding pattern in which said user holds said stock item during said triggering; a tendency of said user to trigger action of a device associated with said image capturing element using a vocal command; and characteristics of a sequence of actions carried out by said user to trigger action of said image capturing device.

3. The method of claim 1, wherein said capturing said at least one image further comprises automatically triggering said image capturing element to exit a sleeping mode and to capture said at least one image, said automatically triggering comprising: using at least one sensor, scanning said vicinity of said image capturing element to identify a user-specific motion pattern in said vicinity of said image capturing element; and triggering said image capturing element upon identification of said user-specific motion pattern.

4. The method of claim 1, wherein said capturing said at least one image further comprises automatically triggering said image capturing element to capture said at least one image, said automatically triggering comprising recognizing at least one predetermined triggering gesture performed by said user, and said user-specific information comprises user-specific nuances of said at least one predetermined triggering gesture.

5. The method of claim 1, wherein said capturing said at least one image further comprises automatically triggering said image capturing element to capture said at least one image, said automatically triggering comprising: analyzing behavior of said user to identify a specific action which the user wishes to carry out; and activating specific components of a device associated with said image capturing element suited for carrying out said identified specific action.

6. The method of claim 1, also comprising illuminating said stock item during said capturing said at least one image using backlighting of a display functionally associated with said image capturing element, wherein said at least one image captured by said image capturing element comprises a plurality of images, and said using backlighting comprises using said backlighting of said display to illuminate said stock item in a controlled fashion so as to illuminate said stock item from different angles thereby to generate different shadow patterns in different ones of said plurality of images.

7. The method of claim 1, also comprising associating each user with at least one said user-segment prior to said interpreting, and automatically learning said segment-specific information over time.

8. The method of claim 1, wherein said tracking motion comprises at least one of: identifying in an image signature of said stock item a three dimensional area having at least one strong spatial gradient, and tracking said area to identify a trajectory of motion of said stock item; and extracting a plurality of measurements of local features distributed at different locations of said at least one image of said stock item, and tracking said local features to identify a trajectory of motion of said stock item.

9. The method of claim 1, wherein said interpreting said user gesture comprises at least one of: using said user-specific information to identify user-specific nuances of a gesture associated with a specific said action corresponding to said tracked motion; and using said user-specific information and information regarding at least one physical-feature of said stock item to identify a user-specific gesture, suitable for an object having said at least one physical feature, associated with a specific said action corresponding to said tracked motion.

10. The method of claim 1, wherein said interpreting is also based on device-specific information relating to users of a specific device including said image capturing element, which device-specific information is learned over time.

11. The method of claim 1, wherein if no action associated with said detected user gesture is identified, said method also comprises: obtaining additional input regarding said detected gesture; characterizing aspects of said detected gesture; identifying whether said detected gesture is a repeated gesture; if said detected gesture is not identified as a repeated gesture, storing said detected gesture as a potential gesture; and if said detected gesture is identified as a repeated gesture: identifying at least one of whether said detected gesture is user dependent and whether said detected gesture is package dependent; associating an action with said gesture; and storing said detected gesture and said action associated therewith based on said identified dependence.

12. The method of claim 1, wherein when said analyzing said at least one image does not uniquely identify said stock item, said uniquely identifying comprises: based on said analyzing said at least one image, identifying a plurality of possible stock items which may be included in said at least one image; assigning a confidence score to each of said plurality of possible stock items; using at least one of said user specific information, said segment specific information, and said object specific information for each of the possible stock items, updating said confidence score for each of said plurality of possible stock items; and based on the confidence scores determining which of the plurality of possible stock items is most likely to be said stock item in said at least one image.

13. The method of claim 12, wherein said uniquely identifying further includes, if said confidence score is below a predetermined threshold, receiving from the user additional input uniquely identifying said stock item in said at least one image.

14. The method of claim 1, further comprising, receiving a voice command from said user for at least one of updating said list of objects and changing said display associated with said list of objects, said voice command specifically identifying said stock item.

15. A device for creating or updating at least one of a list and a database with respect to at least one stock item, the device comprising: an image capturing element configured to capture at least one image of a stock item in a vicinity of said image capturing element; and an object identifier functionally associated with said image capturing element and configured to analyze said at least one image captured by said image capturing element, to identify features of said stock item, and to uniquely identify said stock item based on first obtained information including at least said identified features; a motion identifier configured to track motion of at least one of said stock item, another object, and a hand to detect at least one user gesture; a gesture interpreter, functionally associated with said motion identifier, configured to interpret said at least one detected user gesture based on second obtained information to identify an action associated with said gesture, said action relating to at least one of an update to a list of objects and a change in a display associated with said list of objects, at least one of said first obtained information and said second obtained information including a combination of at least two of the following: specific gesture information identifying gestures associated with specific user actions; user specific information relating to gesture nuances of a specific user and to preferences of said specific user; segment specific information associated with a segment of users including said specific user, said segment specific information relating to at least one of gestures and preferences of users in said user-segment; and object specific information relating to physical characteristics of said stock item; an information learner, functionally associated with said gesture interpreter and configured to learn at least one of said user specific information, said segment specific information, and said object specific information using machine learning techniques; and an action module functionally associated with said gesture interpreter and configured, based on said interpretation of said gesture interpreter, to carry out said action associated with said gesture.

16. The device of claim 15, wherein said information learner is configured to automatically learn said user-specific information which relates to gestures and preferences of a specific user over time and to store said learned user-specific information, wherein said information learner is configured to learn at least one of: information regarding a purchase history of said user, information regarding a list history of said user, information regarding speech of said user, information regarding one or more segments of users with which said user is associated, and information relating to user-specific aspects when triggering said image capturing element to capture said at least one image, said user-specific triggering aspects including at least one of: a distance of said user from said image capturing element at a time of triggering said image capturing element; a triggering gesture used by said user at said time of said triggering; a speed of said triggering gesture; timing of said triggering gesture; a duration at which said user is in said vicinity of said device for the purpose of said triggering; characteristics of a holding pattern in which said user holds said stock item during triggering; a tendency of said user to trigger action of said device using a vocal command; and characteristics of a sequence of actions carried out by said user to trigger action of said device.

17. The device of claim 15, wherein said information learner is configured to associate each user with at least one user-segment and to automatically learn segment-specific information relating to preferences of users in said user-segment over time.

18. The device of claim 15, wherein said motion identifier is configured to at least one of: identify in an image signature of said stock item a three dimensional area having at least one strong spatial gradient, and to track said area thereby to identify a trajectory of said tracked motion; and extract a plurality of measurements of local features distributed at different locations of said image of said stock item, and to track said local features thereby to identify a trajectory of said tracked motion.

19. The device of claim 15, wherein said gesture interpreter is configured to at least one of: use said user-specific information to identify user-specific nuances of a gesture associated with a specific said action corresponding to said tracked motion; and use said user-specific information and information regarding at least one physical-feature of said stock item to identify a user-specific gesture, suitable for an object having said at least one physical feature, associated with a specific said action corresponding to said tracked motion.

20. The device of claim 15, wherein if said gesture interpreter does not identify any action associated with said detected user gesture, said gesture interpreter is configured to: obtain additional input regarding said detected gesture; characterize aspects of said detected gesture; identify whether said gesture is a repeated gesture; if said gesture is not identified as a repeated gesture, store said gesture as a potential gesture; and if said gesture is identified as a repeated gesture: identify at least one of whether said gesture is user dependent and whether said gesture is package dependent; associate an action with the repeated gesture; and store said gesture and the action associated therewith based on said identified dependence.

Description

FIELD AND BACKGROUND OF THE INVENTION

The invention, in some embodiments, relates to the field of retail shopping, and more particularly to methods and devices for improving the shopping experience of a user, both when shopping online and when shopping at a physical retail venue.

In many computerized shopping applications currently available in the market, products can be uniquely identified, for example by identifying the Stock Keeping Unit (SKU) of the product, based on extraction of visual features from the package of the product, typically using computer vision and/or image processing methods. However, in existing products, the database containing all the product images is built manually by an operator. In order to keep the database up to date, each change in the packaging of a product must be manually entered by the operator, which often causes inaccuracies due to update backlog or to the operator being unaware of changes to the packaging.

Various devices and products exist, in which hand movements and object movements are translated into commands, for example using computer vision and/or image processing methods. However, these devices typically require use of a specific object, such as a specific remote control, or may recognize a limited number of objects identified by the user during initial setup.

Many shopping applications existing today include data mining or data analysis technologies designed for product matching, such that they can identify products bought together, or bought by a single user, and make suggestions to other users based on such identification. Additionally, product comparison applications exist, particularly for grocery products, which identify, and offer to the user to purchase, a similar product having a better price, or one which is considered healthier. However, these applications do not take into consideration the specific user's preferences, and therefore often suggest grossly irrelevant products to a user, causing the user to waste time reviewing irrelevant suggestions rather than saving the user's time.

Naturally, all shopping applications receive input from the user as to the desired products. Existing speech recognition algorithms and techniques allow users to vocally input information, and of course also recognize terms relating to groceries and other products for which the user may shop. However, shopping applications existing today do not translate the vocal input recognized by speech recognition mechanisms to identification of a specific product, making it difficult and inefficient to receive the user's input in the form of a vocal command.

SUMMARY OF THE INVENTION

Some embodiments of the invention relate to methods and devices for creating a list, such as a list of groceries or of products to be purchased.

According to an aspect of some embodiments of the invention there is provided a method for creating and updating at least one of a list and a database, the method comprising:

triggering an image capturing element to capture at least one image of an object in a vicinity of the image capturing element;

analyzing the at least one image to identify features of the object;

uniquely identifying the object based at least on the identified features;

tracking motion of at least one of the object, another object, and a hand, to detect at least one user gesture;

interpreting the at least one detected user gesture at least based on user-specific information relating to gestures and preferences of a specific user to identify an action associated with the gesture, the action relating to at least one of an update to a list of objects and a change in a display associated with the list of objects; and

based on the interpreting, carrying out the action,

wherein the user-specific information is learned over time.
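The flow enumerated above — capture, analyze, identify, track, interpret, act — can be sketched as a minimal pipeline. The function names, the feature representation, and the gesture-to-action mapping below are all illustrative assumptions; the patent does not prescribe any API or data format.

```python
# Minimal, hypothetical sketch of the claimed flow:
# capture -> analyze -> uniquely identify -> track gesture -> interpret -> act.

def extract_features(image):
    # Stand-in for visual-feature analysis (logos, text, shape, coloring).
    return frozenset(image["visible_marks"])

def identify_item(features, catalog):
    # Uniquely identify the stock item whose known features best overlap
    # the features observed in the image.
    return max(catalog, key=lambda sku: len(catalog[sku] & features))

def interpret_gesture(gesture, user_profile):
    # Map a detected gesture to a list action, honoring any learned
    # user-specific gesture nuances stored in the profile.
    mapping = dict(user_profile.get("gesture_map", {}))
    mapping.setdefault("wave_in", "add")
    mapping.setdefault("wave_out", "remove")
    return mapping.get(gesture)

def update_list(shopping_list, action, sku):
    # Carry out the action: update the list of objects.
    if action == "add":
        shopping_list.append(sku)
    elif action == "remove" and sku in shopping_list:
        shopping_list.remove(sku)
    return shopping_list
```

A usage pass might feed a captured frame through `extract_features`, resolve the SKU against a catalog, and apply the interpreted gesture to the list; the learned `user_profile` would be the piece updated over time.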

In some embodiments, the object comprises a grocery product, and the list comprises a groceries list. In some embodiments the object comprises a retail product, and the list comprises a shopping list. For example, the product may comprise a book, an office supply product, a health care product, a pharmaceutical product, a beauty care product, an electronics product, a media product, an industrial warehouse item, a service sector warehouse item, and any other suitable retail product.

In some embodiments, the object comprises a stock item stocked by a retail venue, and the list comprises a stocking list of the venue. The stock item may be any suitable stock item, such as, for example, electronics, media products, office supplies, books, pharmaceuticals and health care products, grocery products, and beauty products.

The user-specific information may be any suitable user-specific information. That being said, in some embodiments the user-specific information comprises information regarding a purchase history of the user, information regarding a list history of the user, information regarding gestures of the user, information regarding speech of the user, such as information regarding diction or an accent, and information regarding one or more segments of users with which the user is associated.

In some embodiments the triggering comprises manually triggering the image capturing element. In some embodiments the triggering comprises automatically triggering the image capturing element.

In some embodiments, the automatically triggering comprises using at least one sensor, scanning the vicinity of the image capturing element to identify at least one of an object and a triggering event, and triggering the image capturing element upon identification of an object and/or a triggering event in the vicinity of the image capturing element.

In some embodiments, the at least one sensor comprises a proximity sensor, and the triggering event comprises a user or an object being at a predetermined proximity to the image capturing element for a predetermined duration.

In some embodiments, the at least one sensor comprises a barcode reader and the triggering event comprises identification of a barcode present in the vicinity of the image capturing element for a predetermined duration. In some such embodiments, the triggering event comprises identification of a specific motion pattern of the barcode in the vicinity of the image capturing element. In some such embodiments, the specific motion pattern of the barcode is user-specific and is learned over time as part of the user-specific information.

In some embodiments, the at least one sensor comprises a Quick Response (QR) code reader and the triggering event comprises identification of a QR code present in the vicinity of the image capturing element for a predetermined duration. In some such embodiments, the triggering event comprises identification of a specific motion pattern of the QR code in the vicinity of the image capturing element. In some such embodiments, the specific motion pattern of the QR code is user-specific and is learned over time as part of the user-specific information.

In some embodiments, the at least one sensor comprises a motion sensor, and the triggering event comprises identification of motion in the vicinity of the image capturing element. In some such embodiments, the triggering event comprises identification of a specific motion pattern in the vicinity of the image capturing element. In some such embodiments, the specific motion pattern is a user-specific motion pattern and is learned over time as part of the user-specific information.

In some embodiments, the user-specific motion pattern forms part of a repertoire of motion patterns associated with a device including the image capturing element, for example when multiple users use the same device.

In some embodiments, the at least one sensor comprises a microphone or other voice sensor and the triggering event comprises identification of a trigger sound, trigger word, or trigger phrase sounded in the vicinity of the image capturing element.

In some embodiments, the at least one sensor comprises an RFID sensor and the triggering event comprises identification of an RFID tag in the vicinity of the image capturing element.

In some embodiments, the at least one sensor comprises a three dimensional sensor and the triggering event comprises identification of a three dimensional object in the vicinity of the image capturing element. In some such embodiments, the three dimensional sensor is aided by illumination of the object using structured light.
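The sensor-based triggering embodiments above can be sketched as a single dispatch check over recent sensor readings. The reading format, thresholds, and trigger phrases below are assumptions for illustration only.

```python
# Hedged sketch: deciding whether to wake the image capturing element from
# any of the sensor types described above (proximity, barcode/QR, RFID,
# motion, voice). Reading format and thresholds are hypothetical.

PREDETERMINED_DWELL = 0.5  # seconds an object must remain in the vicinity

def should_trigger(events, dwell=PREDETERMINED_DWELL):
    """events: list of (sensor_type, value, duration_seconds) readings."""
    for sensor_type, value, duration in events:
        if sensor_type == "proximity" and value <= 0.3 and duration >= dwell:
            return True  # user/object close enough, for long enough
        if sensor_type in ("barcode", "qr", "rfid") and value is not None:
            return True  # a code or tag was read in the vicinity
        if sensor_type == "motion" and value == "user_pattern":
            return True  # learned user-specific motion pattern recognized
        if sensor_type == "voice" and value in ("add item", "scan"):
            return True  # trigger word or phrase heard
    return False
```

In a device with several of these sensors, the check would run on each batch of readings while the camera stays in its low-power mode.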

In some embodiments, the user-specific information comprises information relating to user-specific triggering aspects, including one or more of:

a distance of the user from the image capturing element at the time of triggering;

a triggering gesture used by the user at the time of triggering;

a speed of the triggering gesture;

timing of the triggering gesture;

a duration for which the user is in the vicinity of the image capturing element for the purpose of triggering;

characteristics of a holding pattern in which the user holds the object during triggering;

a tendency of the user to trigger action of a device associated with the image capturing element using a vocal command; and

characteristics of a sequence of actions carried out by the user to trigger action of the device.
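The user-specific triggering aspects listed above amount to a per-user profile that is refined over time. The schema and learning rule below are illustrative assumptions; the patent does not define a data model.

```python
# Hypothetical record for the user-specific triggering aspects listed above.
from dataclasses import dataclass, field

@dataclass
class TriggeringProfile:
    user_id: str
    typical_distance_m: float = 0.4      # distance from camera when triggering
    trigger_gesture: str = "raise_item"  # gesture used to trigger
    gesture_speed: float = 1.0           # relative speed of that gesture
    dwell_seconds: float = 0.5           # duration spent in the vicinity
    holding_pattern: str = "label_out"   # how the item is typically held
    prefers_vocal_trigger: bool = False  # tendency to trigger by voice
    action_sequence: list = field(default_factory=list)  # usual trigger steps

    def update_distance(self, observed_m, rate=0.1):
        # One simple way to "learn over time": an exponential moving average
        # of the observed triggering distance.
        self.typical_distance_m += rate * (observed_m - self.typical_distance_m)
        return self.typical_distance_m
```

Each field mirrors one bullet in the list above; any of them could be learned with a similar running estimate.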

In some embodiments, the automatically triggering comprises recognizing at least one predetermined triggering gesture performed by the user, and the user-specific information comprises user-specific nuances of the at least one predetermined triggering gesture. In some embodiments the triggering comprises analyzing behavior of the user to identify a specific action which the user wishes to carry out and activating specific components of a device associated with the image capturing element, which components are suited for carrying out the identified specific action.

In some embodiments, the automatically triggering comprises, using the image capturing element, capturing at least one triggering image at a trigger imaging rate, and identifying at least one of an object and a triggering event in the at least one triggering image, thereby to trigger capturing of the at least one image. The trigger imaging rate may be any suitable imaging rate. That being said, in some embodiments the trigger imaging rate is not more than 10 images per second, not more than 5 images per second, not more than 2 images per second, or not more than one image per second, so as to conserve energy while an object is not in the vicinity of the image capturing element.

In some embodiments, the at least one triggering image comprises a low quality image, such as a black and white image or a low resolution image.
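The low-rate, low-quality trigger imaging described above can be sketched as a polling loop that wakes full-quality capture only when a cheap frame contains an object. The frame source and object test are passed in as callables, since the patent does not specify either.

```python
# Sketch of low-rate trigger imaging: poll inexpensive frames at a low rate
# and wake full-quality capture only when an object appears.
import time

def watch_for_object(grab_trigger_frame, contains_object, rate_hz=2.0,
                     max_frames=None):
    """Poll low-quality frames at rate_hz; return the first frame containing
    an object, or None if max_frames frames pass without one."""
    seen = 0
    while max_frames is None or seen < max_frames:
        frame = grab_trigger_frame()      # low-res / black-and-white frame
        if contains_object(frame):
            return frame                  # trigger full-quality capture
        seen += 1
        time.sleep(1.0 / rate_hz)         # conserve energy between polls
    return None
```

With `rate_hz` at one or two frames per second, the device idles cheaply until something enters the vicinity.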

In some embodiments, the identifying an object in the at least one triggering image comprises identifying a boundary of an object in the at least one triggering image. In some such embodiments, the identifying an object also comprises eliminating background information from the at least one triggering image prior to identifying the boundary.

In some embodiments, the identifying an object in the at least one triggering image comprises analyzing at least one triggering image to identify a three dimensional structure of the object in the at least one triggering image.

In some embodiments, the identifying an object in the at least one triggering image comprises identifying at least one visual feature of the object in the at least one triggering image. In some such embodiments the at least one visual feature comprises at least one of the presence of writing on the object, the presence of graphics on the object, coloring of the object, the presence of watermarks on the object, and the three dimensional structure of the object.

In some embodiments, the identifying a triggering event in the at least one triggering image comprises comparing at least two of the triggering images to identify motion of the object in the vicinity of the image capturing element. In some such embodiments, the identifying a triggering event comprises identifying a specific motion pattern in vicinity of the image capturing element in the at least two triggering images. In some such embodiments, the specific motion pattern is user-specific and is learned over time as part of the user-specific information.
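Comparing successive trigger frames to detect motion, as described above, reduces to frame differencing. The grayscale-grid frame model and both thresholds below are assumptions for illustration.

```python
# Sketch of motion detection by differencing two successive low-quality
# trigger frames, modeled here as 2-D lists of grayscale pixel values.

def motion_detected(frame_a, frame_b, pixel_thresh=20, count_thresh=3):
    """Return True if enough pixels changed between the two frames."""
    changed = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            if abs(pa - pb) > pixel_thresh:
                changed += 1
    return changed >= count_thresh
```

A user-specific motion pattern, as the text notes, would be recognized by matching the sequence of such changes over several frames rather than a single difference.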

In some embodiments, the triggering also comprises interrupting a software program or application previously running on a device including the image capturing element, to enable capturing of the image and processing thereof by the device.

In some embodiments, the triggering comprises managing availability of computational resources for at least one of analyzing the at least one image, uniquely identifying the object, tracking motion, interpreting the detected user gesture, and carrying out the action, by activating the computational resources based on data obtained during the triggering. In some such embodiments, the managing availability comprises, if a triggering event is not definitively identified, activating computational resources configured to determine whether a triggering event has occurred.

In some embodiments, the triggering comprises identifying a change of object in the vicinity of the image capturing element, and triggering the image capturing element to capture at least one image of the newly provided object.

In some embodiments, the method also comprises illuminating the object during capturing the at least one image of the object by the image capturing element. In some such embodiments, the illuminating comprises illuminating the object using a dedicated illumination source. In some embodiments, the illuminating comprises illuminating the object using monochromatic illumination. In some embodiments, the illuminating comprises illuminating the object using polychromatic illumination.

In some embodiments, the illuminating comprises illuminating the object using backlighting of a display associated with the image capturing element. In some such embodiments, the at least one image captured by the image capturing element comprises a plurality of images, and the using backlighting comprises using the backlighting of the display to illuminate the object in a controlled fashion so as to illuminate the object from different angles, thereby to generate different shadow patterns in different ones of the plurality of images.

In some embodiments, using the backlighting to illuminate the object in a controlled fashion comprises using the backlighting to illuminate the object with patterned monochromatic illumination. For example, illuminating with patterned monochromatic illumination may include initially illuminating the object with blue light, subsequently illuminating the object with green light, and then illuminating the object with red light.
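The blue-then-green-then-red backlight sequence above can be sketched as a loop that sets the display color and captures one image per color. The display and camera interfaces are hypothetical callables, since the patent does not define them.

```python
# Sketch of the patterned monochromatic backlight sequence described above
# (blue, then green, then red), with one capture per illumination color.

def capture_color_sequence(set_backlight, capture,
                           colors=("blue", "green", "red")):
    """Illuminate with each display color in turn and capture one image per
    color; returns a list of (color, image) pairs."""
    frames = []
    for color in colors:
        set_backlight(color)   # display acts as a monochromatic light source
        frames.append((color, capture()))
    return frames
```

The resulting per-color frames are what the surrounding text uses to produce differing shadow patterns across the plurality of images.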

In some embodiments illuminating the object comprises illuminating the object in at least one of a scattered illumination pattern and a structured illumination pattern.

The vicinity of the image capturing element may be of any suitable radius or distance. That being said, in some embodiments, the vicinity of the image capturing element in which the at least one image is captured is user-specific and is learned over time as part of the user-specific information.

In some embodiments, analyzing the at least one image comprises identifying visual features of the object in the at least one image. In some embodiments, analyzing the at least one image comprises virtually combining a plurality of images of the object captured by the image capturing element and identifying the visual features in the virtually combined image. In some such embodiments the visual features include at least one of an image printed on the object, coloring of the object, text or lettering printed on the object, watermarks on the object, and other graphic forms on the object, both visible to the human eye and invisible to the human eye.

In some embodiments, analyzing the at least one image comprises identifying unique object characteristics in the at least one image. In some such embodiments the unique object characteristics comprise at least one of a barcode and a QR code.

In some embodiments, analyzing the at least one image comprises identifying a three dimensional structure of the object in the at least one image. In some such embodiments, the at least one image comprises at least two images, which are combined to identify a three dimensional structure of the object. In some such embodiments, shadow patterns in the at least two images are used to identify the three dimensional structure of the object. The shadow patterns in the at least two images may be caused naturally, or may be generated by illumination with structured light and/or with scattered light.
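One established way to recover surface orientation from shading under known illumination directions is classic three-light photometric stereo; the patent does not name a specific algorithm, so the Lambertian model and the solver below are assumptions, not the claimed method.

```python
# Illustrative three-light photometric stereo: under a Lambertian model,
# intensity = albedo * (light_direction . normal), so three observations
# under known light directions give a 3x3 linear system per pixel.

def solve3(A, b):
    # Solve a 3x3 linear system A x = b by Cramer's rule.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[A[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d
            for j in range(3)]

def normal_from_intensities(light_dirs, intensities):
    """light_dirs: three known unit light directions (e.g. from the display
    backlight pattern); intensities: the three observed pixel brightnesses.
    Returns the surface normal scaled by albedo."""
    return solve3(light_dirs, intensities)
```

Applied per pixel across the differently illuminated images, this yields a normal map from which the three dimensional structure of the object can be estimated.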

In some embodiments, uniquely identifying the object comprises finding in an object-feature database an object entry including at least some of the identified features of the object. In some embodiments, uniquely identifying the object comprises finding in an object-feature database an object entry including all of the identified features of the object.
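Both lookup variants above — an entry containing *some* of the identified features versus one containing *all* of them — can be sketched against a simple in-memory store. The dict-of-sets layout is an assumption standing in for whatever database the device actually uses.

```python
# Sketch of the object-feature database lookup described above.

def find_matches(db, observed, require_all=False):
    """db: {sku: set_of_known_features}; observed: features seen in the image.
    With require_all, return SKUs whose entries contain every observed
    feature; otherwise return overlapping SKUs, best overlap first."""
    observed = set(observed)
    if require_all:
        return [sku for sku, feats in db.items() if observed <= feats]
    return sorted(
        (sku for sku, feats in db.items() if feats & observed),
        key=lambda sku: len(db[sku] & observed),
        reverse=True,
    )
```

The partial-match variant naturally feeds the confidence-scoring step recited in the claims, where remaining candidates are ranked using user-, segment-, and object-specific information.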

In some embodiments, the uniquely identifying the object comprises uniquely identifying the object based on one or more of the user-specific information and information relating to users of a specific device including the image capturing element. In some embodiments, the method also comprises associating each user with at least one user-segment, and the uniquely identifying the object comprises uniquely identifying the object also based on segment-specific information relating to at least one of gestures and preferences of users in the user-segment, the segment-specific information being learned over time. In some embodiments, the interpreting is also based on segment-specific information.

In some embodiments, the uniquely identifying the object comprises uniquely identifying the object based on input provided by the user via an input entry element. In some embodiments, when analyzing the at least one image does not identify a sufficient number of features for uniquely identifying the object, the uniquely identifying comprises uniquely identifying the object using at least one of input captured during the capturing of the image and input provided by the user via an input entry element, and, following the unique identification of the object based on that input, updating an entry for the object in an object-feature database.

In some embodiments, the user input is provided by detection of motion of the object, as described hereinbelow. In some such embodiments, the method also comprises learning from the input provided by the user additional characteristics of the user to be included in the user-specific information.

In some embodiments, the method also comprises, following unique identification of the object using the user input, updating an entry of the object in the object-feature database. For example, if user input was required due to a change in the object packaging which changed some of the object features, the database may be updated with features of the new packaging.

In some embodiments the method also comprises, following unique identification of the object, rendering a virtual model of the object on a display functionally associated with the image capturing element, and/or displaying information regarding the object and/or the list on the display. In some such embodiments the method also comprises providing an indication of the action on the display. In some embodiments, providing an indication of the action comprises providing an animation of the action on the display.

In some embodiments, tracking motion of the object comprises analyzing the at least one image of the object captured by the image capturing element, the analyzing comprising:

using the unique identification of the object, extracting from an object-feature database a three dimensional structure of the object; and

using the extracted three dimensional structure, tracking the object to identify a trajectory of motion thereof.

In some embodiments, the tracking motion comprises identifying in an image signature of the object a three dimensional area having at least one strong spatial gradient, and tracking the area to identify a trajectory of motion of the object. In some embodiments, the tracking motion comprises extracting a plurality of measurements of local features distributed at different locations of the at least one image of the object, and tracking the local features to identify a trajectory of motion of the object.
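The local-feature tracking described above can be sketched as follows, assuming grayscale frames as NumPy arrays. Feature detection here is reduced to thresholding the spatial gradient magnitude, and the trajectory is the displacement of the feature centroid between frames; function names and the specific detector are illustrative only.

```python
# Illustrative sketch: track strong-spatial-gradient points across frames
# to recover a motion trajectory. Names and thresholds are hypothetical.
import numpy as np


def strong_gradient_points(image, threshold=50.0):
    """Return (row, col) points where the spatial gradient magnitude is strong."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return np.argwhere(mag > threshold)


def trajectory_from_frames(frames, threshold=50.0):
    """Estimate per-frame displacement from centroids of strong-gradient points."""
    centroids = []
    for f in frames:
        pts = strong_gradient_points(f, threshold)
        centroids.append(pts.mean(axis=0) if len(pts) else None)
    # Consecutive centroid differences approximate the motion trajectory.
    return [tuple(b - a) for a, b in zip(centroids, centroids[1:])
            if a is not None and b is not None]
```

A production tracker would match individual features between frames rather than a single centroid, but the gradient-then-track structure is the same.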

In some embodiments, interpreting the user gesture comprises using the user-specific information to identify a user-specific gesture associated with a specific action corresponding to the tracked motion.

As mentioned above, in some embodiments, each user is associated with at least one user-segment, for example a segment of children, of females, or of elderly people. In some such embodiments, interpreting the user gesture is also based on information relating to the user-segment for the specific user. In some embodiments, the user is associated with a segment based on predefined characteristics of the user, such as sex, age, and the like. In some embodiments the segment with which the user is associated is learned over time, for example based on the user's list history or based on the types of objects the user presents to the image capturing element. In some embodiments the information relating to the user-segment, such as objects used by the user-segment or preferences of users in the user-segment, is learned over time.

In some embodiments, interpreting the user gesture comprises using at least one of the user-specific information and information regarding at least one physical-feature of the object to identify a user-specific gesture associated with a specific action corresponding to the tracked motion.

In some embodiments, each object is associated with at least one object-segment, for example a segment of heavy objects, of light objects, of fragile objects, or of perishable objects. In some such embodiments, interpreting the user gesture is also based on information relating to the object-segment for the identified object, with respect to all users or with respect to a specific user.

In some embodiments, the at least one physical feature of the object comprises at least one of a weight of the object, dimensions of the object, and a three dimensional shape of the object. For example, the interpretation of the same gesture may be different if the user is holding a heavy object or if the user is holding a light object.

In some embodiments, the interpreting is also based on device-specific information relating to users of a specific device including the image capturing element, which device-specific information is learned over time.

In some embodiments, the action comprises at least one of:

adding a specific number of occurrences of the object to the list;

removing a specific number of occurrences of the object from the list;

displaying at least one object that can be used as a substitute for the identified object;

displaying information relating to the identified object;

displaying the list;

replacing the object in the list by a substitute object;

searching in a database for a specific object;

searching in a database for an object which is similar to the identified object;

filtering the list by a suitable criterion, such as by an object feature;

sorting the list according to a suitable order, such as popularity, relevance, size, location in a store, and the like;

displaying a subset of objects, for example only objects that have previously been purchased by the user;

displaying information relating to an object history of the user; and

requesting help or support.

In some embodiments, each action type is associated with a different user gesture. In some embodiments, for a specific user, each user gesture is associated with a single action type.
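The one-to-one association between gestures and action types can be sketched as a simple dispatch table. The gesture names, action names, and `carry_out` function below are illustrative only; in an actual embodiment the mapping would be populated from the learned user-specific and segment-specific information.

```python
# Hedged sketch: a one-to-one mapping from recognized gestures to action
# types, as each gesture is associated with a single action for a user.
ACTIONS = {
    "swipe_right": "add_to_list",
    "swipe_left": "remove_from_list",
    "circle": "show_substitutes",
    "hold_still": "show_object_info",
    "shake": "request_help",
}


def carry_out(gesture, item, shopping_list):
    """Look up the action for a gesture and apply list-mutating actions."""
    action = ACTIONS.get(gesture)
    if action == "add_to_list":
        shopping_list.append(item)
    elif action == "remove_from_list" and item in shopping_list:
        shopping_list.remove(item)
    return action
```

Display-oriented actions (showing substitutes, filtering, sorting) would be dispatched the same way to a rendering component rather than mutating the list.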

In some embodiments, the object comprises a single unit of a multi-unit object packaging, and the uniquely identifying also comprises, using the unique identification of the object, uniquely identifying the multi-unit object packaging associated with the object. In some such embodiments, carrying out the action comprises carrying out the action with respect to the multi-unit object packaging.

In some embodiments, the method also comprises receiving a voice command for at least one of updating the list of objects and changing the display associated with the list of objects. A detailed explanation as to how an object is identified using the voice command is provided hereinbelow. Once the object is identified, for example with a high enough confidence level, as described hereinbelow, the action is automatically carried out with respect to the identified object, and the user is presented with an option to "undo" this action.
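The confidence-gated automatic action with an "undo" option can be sketched as below. The threshold value, function names, and undo-stack representation are all illustrative assumptions, not the patent's specified implementation.

```python
# Sketch: act automatically only when identification confidence is high
# enough, and keep an undo record so the user can reverse the action.
CONFIDENCE_THRESHOLD = 0.85  # illustrative value


def handle_voice_result(object_id, confidence, shopping_list, undo_stack):
    """Add the identified object if confidence clears the threshold."""
    if confidence < CONFIDENCE_THRESHOLD:
        return False  # defer to the user instead of acting automatically
    shopping_list.append(object_id)
    undo_stack.append(("add", object_id))
    return True


def undo(shopping_list, undo_stack):
    """Reverse the most recent automatic action, if any."""
    if undo_stack:
        action, object_id = undo_stack.pop()
        if action == "add" and object_id in shopping_list:
            shopping_list.remove(object_id)
```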

In some embodiments, if no action associated with the detected user gesture is identified, the method also comprises:

obtaining additional input regarding the detected gesture;

characterizing aspects of the detected gesture;

identifying whether the gesture is a repeated gesture;

if the gesture is not identified as a repeated gesture, storing the gesture as a potential gesture; and

if the gesture is identified as a repeated gesture: identifying at least one of whether the gesture is user dependent and whether the gesture is package dependent; associating an action with the gesture; and storing the gesture and the action associated therewith based on the identified user dependence and/or package dependence.
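The branching above can be sketched as a small state machine: an unknown gesture is first stored as a potential gesture, and a repeat promotes it and binds an action. The class and method names, and the idea of a hashable gesture "signature", are illustrative assumptions.

```python
# Sketch of the learning flow for an unrecognized gesture: the first
# occurrence is stored as "potential"; a repeat promotes it to a learned
# gesture and associates the proposed action with it.
class GestureLearner:
    def __init__(self):
        self.potential = {}   # signature -> characterization of the gesture
        self.learned = {}     # signature -> associated action

    def handle_unknown(self, signature, characterization, proposed_action):
        """Return the action for the gesture once learned, else None."""
        if signature in self.learned:
            return self.learned[signature]
        if signature in self.potential:          # repeated gesture
            self.learned[signature] = proposed_action
            del self.potential[signature]
            return proposed_action
        self.potential[signature] = characterization
        return None
```

An actual embodiment would also record whether the learned gesture is user dependent and/or package dependent and store it accordingly.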

In accordance with an aspect of some embodiments of the invention there is provided a method for learning a user-specific gesture, comprising:

obtaining a detected user gesture not having an identified action associated with the gesture;

obtaining additional input regarding the detected gesture;

characterizing aspects of the detected gesture;

identifying whether the gesture is a repeated gesture;

if the gesture is not identified as a repeated gesture, storing the gesture as a potential gesture; and

if the gesture is identified as a repeated gesture: identifying at least one of whether the gesture is user dependent and whether the gesture is package dependent; associating an action with the gesture; and storing the gesture and the action associated therewith based on the identified user dependence and/or package dependence.

In some embodiments, obtaining the additional input comprises receiving additional input from the user. In some such embodiments, receiving the additional input comprises receiving from the user a vocal command corresponding to the unidentified gesture. In some such embodiments, receiving the additional input comprises the user interacting with an input entry element to select a desired action to be carried out.

In some embodiments, obtaining the additional input comprises obtaining segment-specific input relating to a user-segment with which the user is associated. For example, if the user is associated with a segment of elderly people, the gesture may be better identified based on characteristics of that segment.

In some embodiments, the characterizing comprises characterizing at least one of a trajectory of the gesture, a pattern of motion when performing the gesture, angles at which the gesture is performed, and distances of motion when performing the gesture.

In some embodiments, identifying whether the gesture is a repeated gesture comprises identifying if the user repeats the gesture shortly after detection of the gesture. In some embodiments, identifying whether the gesture is a repeated gesture comprises identifying that the gesture was stored as a potential gesture.

In some embodiments, identifying whether the gesture is a repeated gesture comprises identifying that the repeated gesture does not reflect an intention of the user to carry out an action.

In some embodiments, associating an action with the gesture comprises identifying an action that follows a repeated user gesture more than a predetermined number or percentage of times, and associating the identified action with the gesture. In some embodiments, associating the action with the gesture is carried out manually by the user or by an operator of the user information database.
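The frequency-based association can be sketched directly: count which action most often follows the repeated gesture and bind it once it clears a fraction threshold. The threshold value and function name are illustrative.

```python
# Illustrative sketch: associate an action with a repeated gesture once
# the action follows the gesture in more than a given fraction of cases.
from collections import Counter


def infer_action(follow_up_actions, min_fraction=0.6):
    """follow_up_actions: actions the user performed right after the gesture.

    Returns the dominant follow-up action if it exceeds min_fraction of
    occurrences, else None (no association is made yet).
    """
    if not follow_up_actions:
        return None
    action, count = Counter(follow_up_actions).most_common(1)[0]
    return action if count / len(follow_up_actions) > min_fraction else None
```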

In some embodiments, at least one of the analyzing, uniquely identifying, interpreting, and carrying out the action is carried out at a server located remotely to the image capturing element. In some such embodiments, the server is functionally associated with the object-feature database and/or with the user information database. In some such embodiments the method also comprises transmitting the images captured by the image capturing element to the server. In some such embodiments the method also comprises transmitting the detected user gesture to the server.

In some embodiments, at least one of the analyzing, uniquely identifying, interpreting, and carrying out the action is carried out locally to the image capturing element.

In some embodiments, the method also comprises following the unique identification of the object, displaying at least one of information relating to the identified object, a virtual model of the identified object, and the list, on a display associated with the image capturing element.

The method for creating a list described herein may be carried out using any suitable device. That being said, according to an aspect of some embodiments of the invention there is provided a device for creating and updating a list or a database, the device comprising:

an information learner configured to learn user-specific information which relates to gestures and preferences of a specific user over time and to store the learned user-specific information;

a triggering module configured to identify a triggering event;

an image capturing element, functionally associated with the triggering module, and configured to be triggered by the triggering module, following identification of the triggering event, to capture at least one image of an object in a vicinity of the image capturing element;

an object identifier functionally associated with the image capturing element and configured to analyze the at least one image captured by the image capturing element, to identify features of the object, and to uniquely identify the object based at least on the identified features;

a motion identifier configured to track motion of at least one of the object, another object, and a hand to detect at least one user gesture;

a gesture interpreter, functionally associated with the motion identifier and with the information learner, configured to interpret the at least one detected user gesture based at least on the user-specific information to identify an action associated with the gesture, the action relating to at least one of an update to a list of objects and a change in a display associated with the list of objects; and

an action module functionally associated with the gesture interpreter and configured, based on the interpretation of the gesture interpreter, to carry out the action associated with the gesture.

In some embodiments the information learner is also configured to learn, over time, object-specific information which relates to characteristics of the object and/or segment-specific information which relates to characteristics and objects associated with or used by a segment of users.

In some embodiments, the object comprises a grocery product, and the list comprises a groceries list. In some embodiments the object comprises a retail product, and the list comprises a shopping list. For example, the product may comprise a book, an office supply product, a health care product, a pharmaceutical product, a beauty care product, an electronics product, a media product, an industrial warehouse item, a service sector warehouse item, and any other suitable retail product.

In some embodiments, the object comprises a stock item stocked by a retail venue, and the list comprises a stocking list of the venue. The stock item may be any suitable stock item, such as, for example, electronics, media products, office supplies, books, pharmaceuticals and health care products, grocery products, and beauty products.

In some embodiments, the information learner is configured to learn at least one of information regarding a purchase history of the user, information regarding a list history of the user, information regarding gestures of the user, information regarding speech of the user, such as information regarding diction or an accent, and information regarding one or more segments of users with which the user is associated.

In some embodiments the information learner is functionally associated with a user information database and is configured to store the learned information in the user information database.

In some embodiments the triggering module is configured to identify, as the triggering event, a user manually triggering the image capturing element. In some embodiments the triggering module is configured to automatically identify a triggering event and to trigger the image capturing element.

In some embodiments, the triggering module comprises at least one sensor, which sensor is configured to scan the vicinity of the image capturing element to identify at least one of an object and a triggering event, and the triggering module is configured to trigger the image capturing element upon identification of the object and/or the triggering event in the vicinity of the image capturing element.

In some embodiments, the at least one sensor comprises a proximity sensor configured to identify a user or an object being at a predetermined proximity to the image capturing element for a predetermined duration as the triggering event.
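The proximity trigger described above (predetermined proximity sustained for a predetermined duration) can be sketched as follows. The class name, default distances, and durations are illustrative assumptions.

```python
# Sketch of a proximity-based trigger: fires once an object has stayed
# within max_distance_cm for at least min_duration_s seconds.
import time


class ProximityTrigger:
    def __init__(self, max_distance_cm=50.0, min_duration_s=1.0):
        self.max_distance = max_distance_cm
        self.min_duration = min_duration_s
        self._since = None  # time the object first came into range

    def update(self, distance_cm, now=None):
        """Feed one proximity reading; return True when the trigger fires."""
        now = time.monotonic() if now is None else now
        if distance_cm <= self.max_distance:
            if self._since is None:
                self._since = now
            return (now - self._since) >= self.min_duration
        self._since = None  # object left range; reset the timer
        return False
```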

In some embodiments, the at least one sensor comprises a barcode reader configured to identify a barcode present in the vicinity of the image capturing element for a predetermined duration as the triggering event. In some such embodiments, the triggering event comprises identification of a specific motion pattern of the barcode in the vicinity of the image capturing element. In some such embodiments, the specific motion pattern of the barcode is user-specific and is learned over time as part of the user-specific information.

In some embodiments, the at least one sensor comprises a Quick Response (QR) code reader configured to identify a QR code present in the vicinity of the image capturing element for a predetermined duration as the triggering event. In some such embodiments, the triggering event comprises identification of a specific motion pattern of the QR code in the vicinity of the image capturing element. In some such embodiments, the specific motion pattern of the QR code is user-specific and is learned over time as part of the user-specific information.

In some embodiments, the at least one sensor comprises a motion sensor, configured to identify motion in the vicinity of the image capturing element as the triggering event. In some such embodiments, the motion sensor is configured to identify a specific motion pattern in the vicinity of the image capturing element as the triggering event. In some such embodiments, the motion sensor is functionally associated with the information learner, and the specific motion pattern is user-specific and comprises part of the user-specific information.

In some embodiments, the user-specific motion pattern forms part of a repertoire of motion patterns learned by the information learner and associated with a specific device. For example, when the device is placed in a household, the information learner learns a repertoire of motion patterns suited for all members of the household.

In some embodiments, the at least one sensor comprises a microphone or other voice sensor configured to identify a trigger sound, trigger word, or trigger phrase sounded in the vicinity of the image capturing element as the triggering event.

In some embodiments, the at least one sensor comprises an RFID sensor configured to identify an RFID tag in the vicinity of the image capturing element as the triggering event.

In some embodiments, the at least one sensor comprises a three dimensional sensor configured to identify a three dimensional object in the vicinity of the image capturing element as the triggering event. In some such embodiments, the three dimensional sensor is aided by illumination of the object using structured light.

In some embodiments, the information learner is configured to learn information relating to user-specific triggering aspects, the user-specific triggering aspects including at least one of:

a distance of the user from the image capturing element at a time of triggering the image capturing element by the triggering module;

a triggering gesture used by the user at the time of triggering;

a speed of the triggering gesture;

timing of the triggering gesture;

a duration for which the user remains in the vicinity of the device for the purpose of triggering;

characteristics of a holding pattern in which the user holds the object during triggering;

a tendency of the user to trigger action of the device using a vocal command; and

characteristics of a sequence of actions carried out by the user to trigger action of the device.

In some embodiments, the triggering module is configured to recognize at least one predetermined triggering gesture performed by the user, and the information learner is configured to learn user-specific nuances of the at least one predetermined triggering gesture.

In some embodiments, the triggering module is configured to analyze behavior of the user to identify a specific action which the user wishes to carry out, and to activate specific components of the device, which components are suited for carrying out the identified specific action.

In some embodiments, the image capturing element is configured to capture at least one triggering image at a trigger imaging rate, and the triggering module is configured to identify at least one of an object and a triggering event in the at least one triggering image captured by the image capturing element as the triggering event. The trigger imaging rate may be any suitable imaging rate. That being said, in some embodiments the trigger imaging rate is not more than 10 images per second, not more than 5 images per second, not more than 2 images per second, or not more than one image per second, so as to conserve energy while an object is not in the vicinity of the image capturing element.
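The two-rate capture scheme can be sketched as below: a low trigger imaging rate while idle, switching to full-rate capture once a trigger is detected. The specific rates are illustrative (the text only bounds the idle rate, e.g. at not more than one to ten images per second).

```python
# Sketch: capture triggering images at a low rate to conserve energy,
# switching to full-rate capture once a triggering event is detected.
TRIGGER_RATE_HZ = 1.0    # low-quality idle frames, e.g. one per second
ACTIVE_RATE_HZ = 30.0    # full-rate capture once an object is present


def next_capture_delay(triggered):
    """Seconds to wait before capturing the next frame."""
    return 1.0 / (ACTIVE_RATE_HZ if triggered else TRIGGER_RATE_HZ)
```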

In some embodiments, the image capturing element is configured to capture a low quality image as the at least one triggering image, such as a black and white image or a low-resolution image.

In some embodiments, the triggering module is configured to identify an object in the at least one triggering image by identifying a boundary of an object in the at least one triggering image. In some such embodiments, the triggering module is also configured to eliminate background information from the at least one triggering image prior to identifying the boundary.
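Background elimination followed by boundary identification can be sketched with simple frame differencing against a stored background image. This substitutes a basic difference-threshold approach for whatever segmentation an actual embodiment would use; the function name and threshold are illustrative.

```python
# Illustrative sketch: eliminate static background by differencing against
# a reference frame, then report the bounding box of the remaining pixels
# as the object boundary.
import numpy as np


def object_boundary(frame, background, diff_threshold=25):
    """Return (row_min, col_min, row_max, col_max) of the foreground, or None."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > diff_threshold
    coords = np.argwhere(mask)
    if coords.size == 0:
        return None  # nothing in the scene differs from the background
    (r0, c0), (r1, c1) = coords.min(axis=0), coords.max(axis=0)
    return (r0, c0, r1, c1)
```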

In some embodiments, the triggering module is configured to identify a three dimensional structure of the object in the at least one triggering image, thereby to identify a triggering event.

In some embodiments, the triggering module is configured to identify an object in the at least one triggering image by identifying at least one visual feature of the object in the at least one triggering image. In some such embodiments the triggering module is configured to identify at least one of the presence of writing on the object, the presence of graphics on the object, coloring of the object, and/or the presence of watermarks on the object.

In some embodiments, the at least one triggering image comprises at least two triggering images, and the triggering module is configured to identify a triggering event in the at least two triggering images by comparing the triggering images to identify motion of the object in the vicinity of the image capturing element. In some such embodiments, the triggering module is configured to identify a specific motion pattern in the vicinity of the image capturing element in the at least two triggering images. In some such embodiments, the triggering module is configured to identify a user-specific motion pattern which is learned over time by the information learner as part of the user-specific information.

In some embodiments, the triggering module is also configured to interrupt a software program or application previously running on the device.

In some embodiments, the triggering module is configured to manage availability of computational resources for at least one of the information learner, the object identifier, the motion identifier, the gesture interpreter, and the action module, by activating the computational resources based on data obtained during triggering of the image capturing element. In some embodiments, the triggering module is configured, if a triggering event is not definitively identified, to activate computational resources configured to determine whether a triggering event has occurred.

In some embodiments, the triggering module is configured to identify a change of object in the vicinity of the image capturing element, and to trigger the image capturing element to capture at least one image of the newly provided object.

In some embodiments, the device also comprises an illumination source configured to illuminate the object during the image capturing. In some such embodiments, the illumination source is configured to emit monochromatic illumination. In some such embodiments, the illumination source is configured to emit polychromatic illumination. In some such embodiments, the illumination source is configured to illuminate the object in at least one of a structured illumination pattern and a scattered illumination pattern.

In some embodiments, the illumination source comprises backlighting of a display associated with the device. In some embodiments, the backlighting of the display is configured to illuminate the object in a controlled fashion so as to illuminate the object from different angles, thereby to generate different shadow patterns in different ones of the plurality of images.

In some embodiments, the backlighting of the display is configured to illuminate the object with patterned monochromatic illumination. For example, the display backlighting may initially illuminate the object with blue light, subsequently illuminate the object with green light, and then illuminate the object with red light.

The triggering module is configured to identify a triggering event in any suitable radius or distance from the image capturing element. That being said, in some embodiments, the information learner is configured to learn the magnitude of the vicinity of the device in which the at least one image is captured for a specific user over time, as part of the user-specific information.

In some embodiments, the object identifier is configured to identify visual features of the object in the at least one image. In some embodiments, the object identifier is configured to virtually combine a plurality of images of the object captured by the image capturing element and to identify the visual features in the virtually combined image. In some such embodiments, the object identifier is configured to identify at least one of an image printed on the object, coloring of the object, text or lettering printed on the object, watermarks on the object, and other graphic forms on the object, both visible to the human eye and invisible to the human eye.

In some embodiments, the object identifier is configured to identify unique object characteristics in the at least one image. In some such embodiments the object identifier is configured to identify at least one of a barcode and a QR code as the unique object characteristics.

In some embodiments, the object identifier is configured to identify a three dimensional structure of the object in the at least one image. In some such embodiments, the at least one image comprises at least two images, and the object identifier is configured to combine the at least two images and to identify a three dimensional structure of the object in the combined image. In some such embodiments, the object identifier is configured to use shadow patterns in the at least one image to identify the three dimensional structure of the object. In some such embodiments, the shadow patterns are natural. In some embodiments, the shadow patterns in the at least one image are generated by illumination of the object with structured light and/or with scattered light.

In some embodiments, the object identifier is functionally associated with an object-feature database and is configured to uniquely identify the object by finding in the object-feature database an object entry including at least some of the identified features of the object. In some embodiments, the object identifier is configured to uniquely identify the object by finding in an object-feature database an object entry including all of the identified features of the object.

In some embodiments, the object identifier is configured to uniquely identify the object also based on at least one of the user-specific information and information relating to users of a specific device. For example, the object identifier may identify an orientation in which the user is holding the object and thereby narrow the possible identifications of the object.

In some embodiments, the information learner is configured to associate each user with at least one user-segment, and to learn segment-specific information relating to at least one of gestures and preferences of users in the user-segment over time, and the object identifier is configured to uniquely identify the object also based on the segment-specific information.

For example, the information learner may learn, from objects previously identified for a specific user, that the specific user is a vegetarian, and the object identifier can subsequently narrow the possible identifications of the object to only objects suitable for vegetarians.
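The vegetarian example can be sketched as filtering candidate identifications by the segments the user belongs to. The filter structure and all names below are illustrative, not the patent's data model.

```python
# Hedged sketch: narrow candidate object identifications using learned
# segment-specific exclusions, e.g. for a user learned to be vegetarian.
def narrow_candidates(candidates, user_segments, segment_filters):
    """Drop candidates excluded by any segment the user belongs to."""
    for segment in user_segments:
        excluded = segment_filters.get(segment, set())
        candidates = [c for c in candidates if c not in excluded]
    return candidates


filters = {"vegetarian": {"beef-stock", "chicken-soup"}}
result = narrow_candidates(
    ["tofu", "beef-stock", "lentil-soup"], ["vegetarian"], filters
)
```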

In some embodiments, the device also comprises an input entry element configured to receive input from the user, and the object identifier is configured to uniquely identify the object also based on the input provided by the user via the input entry element.

In some embodiments, the input entry element comprises the image capturing element and the input comprises motion of the object along a specific trajectory, as described hereinbelow. In some such embodiments, the information learner is also configured to learn from the input provided by the user additional characteristics of the user to be included in the user-specific information.

In some embodiments, the object identifier is configured, following unique identification of the object using the user input, to update an entry for the object in the object-feature database. For example, if user input was required due to a change in the object packaging which changed some of the object features, the object identifier may update the database with features of the new packaging.

In some embodiments, the object identifier does not identify a sufficient number of features for uniquely identifying the object, and is configured to use at least one of input captured during capturing of the image and input provided by the user via the input entry element to uniquely identify the object, and to update an entry for the object in the object-feature database following the unique identification of the object based on the input.

In some embodiments, the motion identifier is configured to use the unique identification of the object to extract from the object-feature database a three dimensional structure of the object, and to use the extracted three dimensional structure to track the object in at least two images captured by image capturing element, thereby to identify a trajectory of motion of the object.

In some embodiments, the motion identifier is configured to identify in an image signature of the object a three dimensional area having at least one strong spatial gradient, and to track the area thereby to identify a trajectory of motion of the object. In some embodiments, the motion identifier is configured to extract a plurality of measurements of local features distributed at different locations of the image of the object, and to track the local features thereby to identify a trajectory of motion of the object.

In some embodiments, the user gesture interpreter is functionally associated with the information learner and is configured to use the user-specific information to identify a user-specific gesture associated with a specific action corresponding to the identified trajectory of motion.

As mentioned above, in some embodiments, each user is associated with at least one user-segment, for example a segment of children, of females, or of elderly people. In some such embodiments, the user gesture interpreter is configured to interpret the user gesture also based on information relating to the user-segment for the specific user. In some embodiments, the user is associated with a segment based on predefined characteristics of the user, such as sex, age, and the like. In some embodiments the segment with which the user is associated is learned over time, for example based on the user's list history or based on the types of objects the user presents to the image capturing element. In some embodiments, the information relating to the user-segment, such as objects used by users or preferences of users in the user-segment, is learned over time.

In some embodiments, the user gesture interpreter is configured to use the user-specific information and/or information regarding at least one physical-feature of the object to identify a user-specific gesture associated with a specific action corresponding to the identified trajectory of motion.

In some embodiments, each object is associated with at least one object-segment, for example a segment of heavy objects, of light objects, of fragile objects, or of perishable objects. In some such embodiments, the user gesture interpreter is configured to interpret the user gesture also based on information relating to the object-segment for the identified object, with respect to all users or with respect to a specific user.

In some embodiments, the at least one physical feature of the object comprises at least one of a weight of the object, dimensions of the object, and a three dimensional shape of the object. For example, the interpretation of the same gesture may be different if the user is holding a heavy object or if the user is holding a light object.
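The weight-dependent interpretation described above can be sketched as a small decision rule. The gesture names, action names, and threshold below are hypothetical placeholders, not values from the patent:

```python
# Illustrative sketch: the same raw gesture maps to different actions
# depending on a physical feature of the object, here its weight.

HEAVY_THRESHOLD_KG = 2.0  # assumed cut-off between "heavy" and "light"

def interpret(gesture, object_weight_kg):
    if gesture == "lower":
        # Lowering a heavy object may simply mean setting it down,
        # while the same motion with a light object may signal removal.
        if object_weight_kg >= HEAVY_THRESHOLD_KG:
            return "no_action"
        return "remove_from_list"
    if gesture == "shake":
        return "add_to_list"
    return "unknown"
```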

In some embodiments, the information learner is configured to learn device-specific information relating to users of a specific device over time, and wherein the gesture interpreter is configured to interpret the gesture also based on the device-specific information.

In some embodiments, the user gesture interpreter is configured to identify an action comprising at least one of:

adding a specific number of occurrences of the object to the list;

removing a specific number of occurrences of the object from the list;

displaying at least one object that can be used as a substitute for the identified object;

displaying information relating to the identified object;

displaying the list;

replacing the object in the list by a substitute object;

searching in a database for a specific object;

searching in a database for an object which is similar to the identified object;

filtering the list by a suitable criterion, such as by an object feature;

sorting the list according to a suitable order, such as popularity, relevance, size, location in a store, and the like;

displaying a subset of objects, for example only objects that have previously been purchased by the user;

displaying information relating to an object history of the user; and

requesting help or support.
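The mapping between gestures and the action types listed above can be sketched as a simple per-user dispatch table. The gesture names and the two registered actions are illustrative assumptions:

```python
# Minimal gesture-to-action dispatch table covering two of the action
# types listed above (adding and removing occurrences of an object).

ACTIONS = {}

def register(gesture, action):
    ACTIONS[gesture] = action

def dispatch(gesture, *args):
    action = ACTIONS.get(gesture)
    if action is None:
        raise KeyError(f"no action for gesture {gesture!r}")
    return action(*args)

shopping_list = []
register("tilt_forward", lambda item, n=1: shopping_list.extend([item] * n))
register("tilt_back", lambda item: shopping_list.remove(item))
```

Keeping the table one-to-one (one gesture per action type) matches the constraint stated below that, for a specific user, each gesture is associated with a single action type.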

In some embodiments, each action type is associated with a different user gesture. In some embodiments, for a specific user, each user gesture is associated with a single action type.

In some embodiments, the object comprises a single unit of a multi-unit object packaging, and the object identifier is configured to use the unique identification of the object to uniquely identify the multi-unit object packaging associated with the object. In some such embodiments, the action module is configured to carry out the action identified by the user gesture interpreter with respect to the multi-unit object packaging.

In some embodiments, the device also comprises a voice sensor, such as a microphone, configured to receive a voice command for at least one of updating the list of objects and changing the display associated with the list of objects. A detailed explanation as to how an object is identified using the voice command is provided hereinbelow.

In some embodiments, if the gesture interpreter is not able to identify an action associated with the detected user gesture, the gesture interpreter is also configured to:

obtain additional input regarding the detected gesture;

characterize aspects of the detected gesture;

identify whether the gesture is a repeated gesture;

if the gesture is not identified as a repeated gesture, store the gesture as a potential gesture; and

if the gesture is identified as a repeated gesture: identify at least one of whether the gesture is user dependent and whether the gesture is package dependent; associate an action with the repeated gesture; and store the gesture and the action associated therewith based on the identified dependence.
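The fallback flow above, for a gesture the interpreter cannot match, can be sketched as follows. The data structures, the gesture "signature", and the repetition threshold are illustrative assumptions:

```python
# Sketch of handling an unidentified gesture: store it as a potential
# gesture on first sight; on repetition, associate it with the action
# that followed it and remember that association.

potential_gestures = {}   # signature -> times seen
known_gestures = {}       # signature -> associated action

def handle_unidentified(signature, follow_up_action=None):
    if signature in known_gestures:
        return known_gestures[signature]
    seen = potential_gestures.get(signature, 0) + 1
    potential_gestures[signature] = seen
    if seen >= 2 and follow_up_action is not None:
        # Repeated gesture: bind it to the action that followed it.
        known_gestures[signature] = follow_up_action
        return follow_up_action
    return None  # stored as a potential gesture only
```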

In some embodiments, the gesture interpreter is configured to obtain input relating to the object as the additional input. In some embodiments, the gesture interpreter is configured to receive additional input from the user. In some such embodiments, the gesture interpreter is configured to receive from the user a vocal command corresponding to the unidentified gesture. In some such embodiments, the gesture interpreter is configured to receive input obtained by the user interacting with an input entry element to select a desired action to be carried out.

In some embodiments, the gesture interpreter is configured to obtain segment-specific input relating to a user-segment with which the user is associated. For example, if the user is associated with a segment of elderly people, the gesture may be better identified based on characteristics of that segment.

In some embodiments, the gesture interpreter is configured to characterize at least one of a trajectory of the gesture, a pattern of motion when performing the gesture, angles at which the gesture is performed, and distances of motion when performing the gesture.

In some embodiments, the gesture interpreter is configured to identify whether the gesture is a repeated gesture by identifying if the user repeats the gesture shortly after detection of the gesture. In some embodiments, the gesture interpreter is configured to identify whether the gesture is a repeated gesture by identifying that the gesture was stored as a potential gesture.

In some embodiments, the gesture interpreter is configured to identify that the repeated gesture does not reflect an intention of the user to carry out an action.

In some embodiments, the gesture interpreter is configured to identify an action that follows a repeated user gesture more than a predetermined number or percentage of times, and to associate the identified action with the repeated user gesture.
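The frequency rule above can be sketched as a simple majority test over the actions observed right after the repeated gesture. The 60% threshold is an illustrative assumption:

```python
from collections import Counter

# If one action follows the repeated gesture more than `threshold`
# of the time, associate that action with the gesture.

def infer_association(follow_up_actions, threshold=0.6):
    """follow_up_actions: actions observed right after the repeated
    gesture. Returns the action to associate, or None."""
    if not follow_up_actions:
        return None
    action, count = Counter(follow_up_actions).most_common(1)[0]
    if count / len(follow_up_actions) > threshold:
        return action
    return None
```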

In some embodiments, at least one of the information learner, the object identifier, the gesture interpreter, and the action module are located at a server remote from the image capturing element. In some such embodiments, the device also comprises a transceiver configured to transmit the captured images and/or the detected user gesture to the remote server, and to receive computation output from the remote server. In some embodiments, the user information database and/or the object-feature database are local to the device. In some embodiments, the user information database and/or the object-feature database are remote from the device and are functionally associated therewith.

In some embodiments, the device also comprises a display, functionally associated with the object identifier, and the object identifier is configured, following unique identification of the object, to render an image or a model of the identified object on the display, and/or to display information regarding the object and/or the list on the display. In some embodiments, the display is also functionally associated with the action module, and upon carrying out of the action by the action module an indication of the action is rendered on the display. In some such embodiments, the indication of the action is rendered on the display by providing an animation of the action on the display.

Some embodiments of the invention relate to methods and devices for identifying a suitable product for use by a user, such as a substitute product or a specific product based on a non-specific designation of the product.

According to an aspect of some embodiments of the invention there is provided a method for identifying a suitable product for a user, the method comprising:

obtaining a product dataset comprising a group of products, the products being divided into subgroups according to title, wherein each product is associated with at least one of a brand and a set of features describing the product, and wherein a weight is associated with the brand and with each feature;

receiving from a user an initial identification of a desired product having a specific title associated therewith;

using information in the product dataset and at least one of user-specific information and device-specific information, uniquely identifying a specific desired product intended by the user in the initial identification;

using at least some of the weights of the brand and of the features, computing a distance between the specific desired product and at least two other products in the specific title; and

identifying at least one of the other products, having a small distance from the specific desired product, as a suitable product for the user.
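The weighted distance computation in the steps above can be sketched as follows. Representing each product as a feature-to-value mapping, and charging each mismatched feature its weight, is a simplifying assumption; the patent does not fix a particular distance formula:

```python
# Weighted distance between two products within the same title:
# each feature (including brand) whose values differ contributes
# its weight to the distance; a small distance means a good substitute.

def product_distance(p1, p2, weights):
    features = set(p1) | set(p2)
    return sum(weights.get(f, 1.0)
               for f in features
               if p1.get(f) != p2.get(f))

def best_substitute(desired, candidates, weights):
    return min(candidates,
               key=lambda c: product_distance(desired, c, weights))
```

With a high weight on "brand" and a low weight on "size", for example, a same-brand product in a different package size would be preferred over a same-size product from another brand.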

The group of products may be any suitable group of products. That being said, in some embodiments the group of products comprises grocery products, electronics, books, pharmaceutical products, health care products, beauty care products, manufacturing products, agricultural products, games, gaming products, toys, clothing, shoes, entertainment products such as plays, concerts, and movies, vehicles, such as cars, motorcycles, and yachts, and the like.

In some embodiments, the title comprises the natural name of a product. Exemplary titles may include, "milk", "fresh produce", "frozen vegetables", "children's books", "non-fiction books", and the like. Typically, each title has a plurality of products associated therewith. For example, fat free milk, low fat milk, whole milk, lactose free milk, and soy milk, are all associated with the title "milk".

In some embodiments, the brand relates to a manufacturer or distributor of the product. As such, in some embodiments, many products share a single brand. For example, the brand "Kit-Kat" may be associated with the products "Kit-Kat, 36-Count" and "KIT KAT CHUNKY Peanut Butter 48 g". In some embodiments, a single product may be associated with more than one brand, for example products associated with the brand "Kit-Kat" may also be associated with the brand "Nestle".

The features associated with a product may be any suitable features which describe the product, and may include, for example, flavor, nutritional identifications such as "diet", "low fat", "sugar free", "gluten free", and "lactose free", denominational identifications such as "vegetarian", "vegan", "Kosher", and "Halal", price, size of packaging, and the like. Typically, each feature is associated with a set of possible values which it may receive.

In some embodiments, obtaining the product dataset comprises, for each product, automatically identifying the product's title, brand, and features, and automatically building an entry in the product dataset using at least one of keywords in the product name, keywords in the product description, keywords found on the packaging of the product, and information gleaned from external sources, such as manufacturer and distributor websites. In some embodiments in which the product comprises a food product, building the entry additionally uses information gleaned from nutritional values of the product, and information gleaned from the list of ingredients of the product.
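The keyword-based entry building described above can be sketched as follows. The keyword tables are small hypothetical examples, not an exhaustive taxonomy, and real embodiments would also draw on packaging text and external sources:

```python
# Build a product-dataset entry (title + features) from keywords found
# in the product name and description.

TITLE_KEYWORDS = {"milk": "milk", "chocolate": "chocolate"}
FEATURE_KEYWORDS = {"low fat": ("fat", "low"),
                    "sugar free": ("sugar", "free"),
                    "lactose free": ("lactose", "free")}

def build_entry(name, description=""):
    text = f"{name} {description}".lower()
    entry = {"name": name, "title": None, "features": {}}
    for kw, title in TITLE_KEYWORDS.items():
        if kw in text:
            entry["title"] = title
            break
    for kw, (feature, value) in FEATURE_KEYWORDS.items():
        if kw in text:
            entry["features"][feature] = value
    return entry
```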

In some embodiments, the dataset may be automatically obtained at suitable locations. For example, in a supermarket, images obtained by security cameras observing the checkout points may be correlated with barcode and other information registered by the cashier during checkout, and each product identified this way may be added to the dataset or updated within the dataset. In such cases OCR may be used to extract brand and feature information from the captured image of the package.

In some embodiments, a human operator oversees the dataset creation, and may approve the information collected for each product and/or may add other information for each product. In some such embodiments, the human operator may also identify mistakes in the creation of the dataset, such as associating a product with the wrong title, and may use machine learning techniques to "teach" the system how to avoid such mistakes.

As described in further detail hereinbelow, in some embodiments, the weights associated with the brand and with the features of each product are user-specific. In some embodiments, the user-specific weights are manually determined by user input. In other embodiments, the user-specific weights are learned over time, for example based on choices the user makes after being offered the choice of two or more suitable products, or based on the user's product history. As an example, if the user's product history shows that when selecting a product the user is oblivious to the color of the product, the weight of the "color" feature is automatically lowered for that product, for that title, or for all products, with respect to the user.

Similarly, the user may specify, or the system may learn, a filtering criterion for the user. For example, the user may specify that he is vegetarian, or the system may learn from the user's product history that the user only purchases vegetarian products, and may then increase the weight of the "vegetarian" feature so that vegetarian products are more likely to, and in some embodiments only vegetarian products will, be selected as suitable products for the user.
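The weight-learning idea above can be sketched with a simple update rule: if a feature's value varies freely across the user's purchases, the user is presumably oblivious to it and its weight is lowered; if it is constant, the weight is raised. The multiplicative update and learning rate are illustrative assumptions:

```python
# Adjust per-feature weights from a user's purchase history.

def update_weights(weights, purchase_history, lr=0.5):
    """purchase_history: list of product dicts (feature -> value)."""
    updated = dict(weights)
    for feature in weights:
        values = {p.get(feature) for p in purchase_history if feature in p}
        if len(values) > 1:
            updated[feature] = weights[feature] * (1 - lr)  # user ignores it
        elif len(values) == 1:
            updated[feature] = weights[feature] * (1 + lr)  # user cares
    return updated
```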

In some embodiments, the weights associated with the brand and with the features of each product are segment-specific. In some such embodiments, each user is associated with one or more user-segments, and user-specific weights are assigned to the brand and/or to the features based on the user-segment with which the user is associated. For example, in the context of food products, the user may be associated with a segment of vegetarians, and suitable weights are assigned to the brand and to the features of products for users in that segment, for example giving a weight of zero for each product containing meat. In some such embodiments, assigning the weights comprises aggregating information relating to product histories and substitutions of users in the user-segment.
