T 1280/18 (Managing data/SAMSUNG) 15-12-2022
METHOD AND APPARATUS FOR MANAGING DATA
I. The appeal lies from the decision of the examining division to refuse European patent application No. 15174165.9.
The contested decision cited, inter alia, the following document:
D1: US 2011/0304774 A1, published on 15 December 2011
The examining division decided that claim 1 of neither the main request nor the first auxiliary request was clear (Article 84 EPC) and that both lacked an inventive step over document D1 (Article 56 EPC). It also held that claim 1 of the first auxiliary request did not meet the requirements of Article 123(2) EPC.
II. In its statement of grounds of appeal, the appellant requested that the decision under appeal be set aside and that a patent be granted on the basis of the claims of a sole new request submitted with the statement of grounds of appeal.
III. In a communication annexed to a summons to oral proceedings, the board expressed its doubt that the amendments made to claim 1 had a basis in the application as originally filed (Article 123(2) EPC). It also set out its preliminary opinion that the objection under Article 84 EPC raised by the examining division seemed to have been overcome but that claim 1 of the sole request was still not inventive (Article 56 EPC).
IV. In response to the summons, the appellant informed the board that neither the applicant nor its representative would be attending the scheduled oral proceedings. The oral proceedings were thus cancelled.
V. Claim 1 of the sole request reads as follows (itemisation added by the board):
F1: A method for managing data in an electronic device, the method comprising:
F2: detecting (1403) a request for tagging a data record;
F3: selecting (1405) a portion of the data record in response to the request;
F4: identifying a spoken word or a voiceprint in the selected portion of the data record;
F5: acquiring a content item based on the selected portion of the data record; and
F6: associating the content item with the data record,
F7: wherein the content item includes an image corresponding to a meaning of the spoken word or an image corresponding to information regarding a user corresponding to the voiceprint, and
F8: wherein the image is acquired among images pre-stored in at least one of the electronic device or a web server.
Application
1. The application relates to a method for tagging a portion of a data record such as a text file or an audio file with a content item such as an image.
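Read as a sequence of steps, the claimed method set out in point V above can be pictured by the following minimal Python sketch. All names used here (PRESTORED_IMAGES, identify_speech, tag_data_record and so on) are hypothetical and serve only to illustrate one possible reading of features F1 to F8; they do not appear in the application.

```python
# Illustrative sketch only: hypothetical names, not taken from the application.

# A small catalogue standing in for images "pre-stored in at least one of
# the electronic device or a web server" (feature F8).
PRESTORED_IMAGES = {
    "birthday": "cake.png",            # image corresponding to the meaning of a spoken word
    "user:alice": "alice_avatar.png",  # image corresponding to information regarding a user
}

def identify_speech(portion):
    """Stand-in for feature F4: return a recognised spoken word and/or a voiceprint id."""
    # In a real system this would be speech recognition / speaker identification.
    return portion.get("spoken_word"), portion.get("voiceprint_user")

def acquire_content_item(portion):
    """Features F5, F7, F8: pick a pre-stored image matching the word or the user."""
    word, user = identify_speech(portion)
    if word and word in PRESTORED_IMAGES:
        return PRESTORED_IMAGES[word]
    if user:
        return PRESTORED_IMAGES.get(f"user:{user}")
    return None

def tag_data_record(data_record, request):
    """Features F1 to F3 and F6: select the requested portion and associate the image."""
    portion = data_record["portions"][request["portion_index"]]   # feature F3
    image = acquire_content_item(portion)                         # features F4 and F5
    if image is not None:
        data_record.setdefault("tags", []).append(image)          # feature F6
    return data_record

# Example: a voice memo whose first portion contains the spoken word "birthday".
record = {"portions": [{"spoken_word": "birthday", "voiceprint_user": None}]}
print(tag_data_record(record, {"portion_index": 0}))
```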
Sole request - Article 84 EPC
2. Claim 1 of the sole pending request corresponds to claim 1 of the main request considered in the contested decision but with the addition of the text "an image corresponding to" before "information regarding a user corresponding to the voiceprint" (see feature F7).
3. This amendment overcomes the Article 84 EPC objection raised by the examining division (see section 1.1.1 of the contested decision).
Sole request - Article 56 EPC
4. The examining division considered document D1 to be the closest prior art to the subject-matter of claim 1 of the versions of the main and first auxiliary requests then on file.
In the grounds of appeal, the appellant also considered D1 to be the closest prior art.
4.1 Document D1 discloses a computing device environment comprising a computing device 102 (for example, a video game console, a desktop or laptop computer), a display 104 (for example, a television or a monitor) and an input device 106 configured to detect user inputs. The input device 106 can be a depth-sensing camera, a video camera and/or a directional audio input device such as a directional microphone array. If the input device 106 is a depth-sensing camera, the computing device 102 may be configured to locate persons in the image data acquired from the depth-sensing camera and to track the motions of identified persons to determine whether any motion corresponds to a recognised input, for example the motion of two players jumping. The identification of a recognised input may trigger the automatic addition of tags associated with the recognised input to the recorded content. Likewise, if the input device 106 is a directional microphone, the computing device 102 may associate speech input with a person in the depth and/or image data via directional audio data. For example, when recording a video of two players jumping, the tag "awesome double jump!" is automatically generated (see paragraphs [0009] to [0012], [0018] and [0019]; Figures 1 and 2; paragraphs [0021] to [0023] in conjunction with Figure 4A).
4.2 A recognised input might be a recognised speech segment, such as a recognised word or phrase (see paragraph [0024]). A speech-related tag may comprise, for example, text or audio versions of recognised words or phrases, metadata associating a received speech input with an identity of a user from whom the speech was received, or any other suitable information related to the content of the speech input. The speech-related tag may comprise metadata regarding a volume of the speech input, and/or any other suitable information related to audio presentation of the speech input during playback (see paragraph [0027]).
4.3 The input data is thus tagged with a "contextual tag" associated with the recognised input and the tagged data is recorded. Where the recognised input is a recognised motion input, the contextual tag may comprise text commentary related to the identified motion to be displayed during playback of a video image of the motion or may comprise searchable metadata that is not displayed during playback. As an example, if a user performs a kick motion, a metadata tag identifying the motion of a kick may be applied to the input data. As another example, a metadata tag may identify each user in a frame of image data as determined via facial recognition (see paragraph [0025]).
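The contextual tagging of document D1 summarised in points 4.1 to 4.3 can be pictured schematically as follows. The sketch uses hypothetical names and merely illustrates the mechanism described in paragraphs [0021] to [0027] of D1; it is not taken from D1 itself.

```python
# Schematic illustration (hypothetical names) of D1's contextual tagging:
# a recognised input in the captured data triggers the automatic addition
# of a contextual tag to the recorded content.

RECOGNISED_INPUTS = {
    "double_jump": {"type": "motion", "tag": "awesome double jump!"},
    "kick":        {"type": "motion", "tag": "kick"},
}

def detect_recognised_input(frame):
    """Stand-in for motion/speech recognition on depth, image or audio data."""
    return frame.get("detected")  # e.g. "double_jump", "kick" or None

def record_with_contextual_tags(frames):
    """Record the input data and attach a contextual tag whenever a
    recognised input is detected (cf. D1, paragraphs [0023] to [0025])."""
    recorded = []
    for frame in frames:
        entry = {"data": frame, "tags": []}
        detected = detect_recognised_input(frame)
        if detected in RECOGNISED_INPUTS:
            entry["tags"].append(RECOGNISED_INPUTS[detected]["tag"])
        recorded.append(entry)
    return recorded

# Example: two players perform a double jump in the second captured frame.
frames = [{"detected": None}, {"detected": "double_jump"}]
for entry in record_with_contextual_tags(frames):
    print(entry["tags"])
```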
4.4 In document D1, a "request for tagging a data record" as defined in claim 1 is detected when a "recognised input" is detected in the user input (paragraphs [0002], [0009], [0011] and [0023] to [0025]). It follows that document D1 discloses a method according to features F1 and F2.
4.5 The recorded depth and/or video image data content of document D1 corresponds to the "data record" of claim 1. It is the target of the request for tagging (see Figures 1 and 2; paragraphs [0020] and [0021]). Thus, if the "portion" of the data record is taken to be the complete data record, or the data record for a particular time duration, document D1 also discloses feature F3.
4.6 Document D1 further discloses feature F4 in cases where the recognised input is a recognised speech segment (paragraph [0027]).
4.7 D1 discloses that where users are identifiable via facial recognition, avatars (or other "characterisations") may be generated for each user. In this manner, a computing system may produce an animated representation of recorded tagged data such that the avatar of each user talks and moves in the same manner as the user did during the recording of the scene (see paragraph [0029]). Since document D1 discloses the use of an avatar as a "content item", document D1 discloses features F5 to F7:
- This content item (i.e. the "avatar" or "animated representation of recorded tagged data") is acquired based on the selected portion of the data record (the corresponding recording of the scene in which the user from whom the avatar was generated talks and moves) (see feature F5);
- The avatar is associated with the corresponding video recording of the user (see feature F6);
- The avatar includes an image corresponding to information regarding a user corresponding to the voiceprint (see feature F7).
4.7.1 The examining division identified features F7 and F8 as non-technical distinguishing features (see section 1.1.2.5 of the decision). In its grounds of appeal, the appellant argued that document D1 did not disclose associating images with data records based on spoken words or voiceprints identified in portions of the data records, wherein the image corresponds to a meaning of the spoken word or to information regarding a user corresponding to the voiceprint and wherein the image is acquired from among pre-stored images (corresponding to features F3, F4, F6, F7 and F8). For the reasons given above, however, the board is of the opinion that feature F8 is the only distinguishing feature. In particular, the board considers that the example based on users' avatars in document D1 discloses feature F7.
4.8 Acquiring the user's avatar from among avatars/images pre-stored in at least one of the electronic device or a web server, as defined in feature F8, is an obvious possibility for the skilled person.
The appellant argued that, by tagging data records with a limited set of images, it was possible, for instance, to group data records associated with the same image, thereby improving the retrieval of a particular data record for playback. However, the board notes that claim 1 defines neither any "grouping" of data records nor any retrieval of such data records.
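Purely by way of illustration of the acquisition defined in feature F8, and using hypothetical names taken neither from D1 nor from the application, such a selection from pre-stored images could take the following form:

```python
# Hypothetical sketch of feature F8: choose a user's avatar from images
# pre-stored on the device, falling back to a (simulated) web server.

DEVICE_AVATARS = {"alice": "alice_local.png"}
SERVER_AVATARS = {"bob": "bob_remote.png"}

def acquire_prestored_avatar(user_id):
    """Return a pre-stored avatar for the identified user, if any."""
    if user_id in DEVICE_AVATARS:          # pre-stored in the electronic device
        return DEVICE_AVATARS[user_id]
    return SERVER_AVATARS.get(user_id)     # pre-stored on a web server

print(acquire_prestored_avatar("alice"))   # -> alice_local.png
print(acquire_prestored_avatar("bob"))     # -> bob_remote.png
```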
4.9 Therefore, claim 1 of the sole request is not inventive (Article 56 EPC).
Order
For these reasons it is decided that:
The appeal is dismissed.