T 1557/16 (Lost Frame concealment / HUAWEI) 04-05-2021
A METHOD AND DEVICE FOR LOST FRAME CONCEALMENT
Novelty - (no)
Claims - clarity (no)
Amendments - added subject-matter (yes)
I. The present decision relates to the appeal filed by the applicant against the Examining Division's decision to refuse European patent application 08 757 725.
II. The Examining Division held that the claims of the sole request on file comprised subject-matter extending beyond the content of the application as filed, contrary to Article 123(2) EPC. It further held that the subject-matter of claims 1 and 4 was not clearly defined, contrary to Article 84 EPC, and that the subject-matter of claims 1-4 was not new in the sense of Article 54 EPC over the combined teaching of documents D7 and D8 (which the Examining Division considered incorporated by reference into D7):
D7: S. Ragot et al., "ITU-T G.729.1: An 8-32 kbit/s Scalable Coder Interoperable With G.729 For Wideband Telephony and Voice Over IP"; 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing; 15-20 April 2007; pages IV-529 to IV-532,
D8: "G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729", G.729.1 (05/2006), International Telecommunication Union, 29 May 2006
III. The subject-matter of dependent claim 5 was considered to constitute an obvious alternative to the teaching of D7, for which reason the existence of an inventive step (Article 56 EPC) was denied.
IV. The appellant requests that the decision be set aside, and that a patent be granted based on a new set of claims 1-6, submitted with the grounds of appeal.
V. Claim 1 of the appellant's request reads:
A device implementing lost frame concealment, comprising:
a lost frame detector (501), adapted to receive voice frame data, detect whether a voice frame in the received voice frame data is lost, and generate frame loss information;
a decoding module (502), adapted to decode the received current voice frame and generate a low band signal and a high band decoded signal of the current voice frame that is produced after inverse MDCT, the length of the high band decoded signal of the current voice frame is two frames;
a low band delay module (504), adapted to set a delay time for a low band decoded signal of the current voice frame and generate a low band signal of the previous voice frame, wherein the delay time is duration of one frame;
wherein the previous voice frame is the frame before the current voice frame;
a low band signal recovering module (503), adapted to recover a low band signal of the previous voice frame when the frame loss information shows that the previous voice frame is lost;
a high band lost frame concealment module (505), adapted to receive the high band decoded signal of the current voice frame and the frame loss information and generate a high band signal of the previous voice frame; wherein the length of the high band decoded signal of the previous frame is one frame; and
wherein: generating a high band signal of the previous voice frame comprises:
judging whether a high band decoded signal of the current voice frame is received on the basis of the frame loss information;
generating, by using the high band decoded signal of the current voice frame, the high band signal of the previous voice frame, if the high band decoded signal of the current voice frame is received;
recovering the high band decoded signal of the current voice frame, if the high band decoded signal of the current voice frame is not received, the length of a recovered high band decoded signal of the current voice frame is two frames; and
generating the high band signal of the previous voice frame;
wherein a first semi-window signal of the high band decoded signal of the current voice frame is recovered, a second semi-window signal of high band decoded signal of the current voice frame is delayed, a judgment is made about whether a previous voice frame high band decoded signal is received, if the signal is received, the first semi-window signal of high band decoded signal of the current voice frame and the second semi-window signal of previous voice frame high band decoded signal are superposed to produce a high band decoded signal of the previous voice frame;
if the signal is not received, the second semi-window signal of previous voice frame high band decoded signal is recovered, after the handling, the first semi-window signal of high band decoded signal of the current voice frame and the second semi-window signal of previous voice frame high band decoded signal are superposed to produce a high band decoded signal of the previous voice frame;
a QMF synthesis filter (506), adapted to receive the low band signal of the previous voice frame generated by the low band delay module and the high band signal of the previous voice frame generated by the high band lost frame concealment module, synthetically filter the received low band signal of the previous voice frame and the received high band signal of the previous voice frame, and output a previous frame voice signal; or, receive the low band signal of the previous voice frame recovered by the low band signal recovering module and a received high band signal of the previous voice frame generated by the high band lost frame concealment module, synthetically filter the low band signal of the previous voice frame and the high band signal of the previous voice frame, and output a previous frame voice signal.
VI. Independent claim 4 defines a corresponding method for implementing lost frame concealment.
VII. According to the appellant, the new claims had been amended to address the objections raised by the Examining Division with regard to Articles 84 and 123(2) EPC. Arguments in favour of novelty and inventive step with regard to document D8 were put forward.
VIII. In this respect, the appellant stressed that document D8 did not disclose the following features of claim 1:
... wherein a first semi-window signal of the high band decoded signal of the current voice frame is recovered, a second semi-window signal of high band decoded signal of the current voice frame is delayed, a judgment is made about whether a previous voice frame high band decoded signal is received, if the signal is received, the first semi-window signal of high band decoded signal of the current voice frame and the second semi-window signal of previous voice frame high band decoded signal are superposed to produce a high band decoded signal of the previous voice frame ....
IX. Lastly, the appellant argued that the method of independent claim 4 differed from the method of D8 by the corresponding functionalities.
X. In a communication under Article 15(1) RPBA, the appellant was informed of the Board's preliminary view. Besides various new objections under Articles 84 and 123(2) EPC, the Board endorsed the Examining Division's analysis developed in the passage bridging pages 8 and 9 of the impugned decision with regard to lack of novelty of the claimed subject-matter vis-à-vis D7 and D8. The appellant's view that D8 did not disclose any detailed method about how the high band decoded signal of the previous voice frame was produced did not persuade the Board.
XI. In the reply to the communication, the appellant did not comment on the preliminary findings of the Board, but indicated that they would not attend the oral proceedings, which were, accordingly, cancelled.
Clarity and added subject-matter
1. The appellant did not comment on the objections raised by the Board under these topics. The Board has no reason to diverge from its preliminary view. Thus, the subject-matter of claims 1 and 4 is not clearly defined, contrary to Article 84 EPC; moreover, the subject-matter of dependent claim 6 defines added subject-matter, contrary to Article 123(2) EPC.
2. The main obstacle to the grant of a patent, though, resides in the lack of novelty of the claimed subject-matter, as expounded below.
Novelty - Article 54 EPC
3. Reference is made primarily to document D7. Considering that D7 constitutes a specific implementation of the G.729.1 standard, defined by ITU-T (see sections "Abstract" and "Introduction" in D7), the content of D8, which defines said standard, constitutes an inherent part of the teaching of D7, as acknowledged by the Examining Division.
4. The appellant's submission that D8 does not disclose any detailed method about how the high band decoded signal of the previous voice frame is produced does not reflect the actual content of D8.
5. Equation 127 of D8 (page 53, section 7.3.8)
FORMULA/TABLE/GRAPHIC
establishes how the high band decoded signal of the current voice frame is obtained, using the inverse Modified Discrete Cosine Transform (MDCT).
6. Equation 128
FORMULA/TABLE/GRAPHIC
further defines how the high band decoded signal of the previous voice frame is obtained by superposing the first semi-window pertaining to the high band signal of the current frame with the second semi-window of the corresponding previous frame. This results from the reference to parameter n+160 for the synthesis weighting window wTDAC in equation 128. The weighting window wTDAC is two frames long and consists of a first and a second semi-window, in the terminology of the present application.
7. The view that section 7.3.8 of D8 does not provide any indication as to how the high band decoded signal of the previous voice frame is produced is therefore not correct, since equation 127 applies to each successive (current) voice frame and thus also defines the high band decoded signal of the previous voice frame.
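The windowed overlap-add read into equations 127 and 128 in points 5 to 7 can be sketched as follows. This is a minimal illustration in Python/NumPy under stated assumptions (frame length of 160 samples for the high band, function and variable names chosen for this sketch); it is not a reproduction of the literal equations of D8:

```python
import numpy as np

FRAME = 160  # assumed high-band frame length in samples (20 ms at 8 kHz)

def overlap_add(prev_windowed: np.ndarray, curr_windowed: np.ndarray) -> np.ndarray:
    """Produce the previous frame's high band signal by superposing
    the second semi-window of the previous two-frame inverse-MDCT output
    with the first semi-window of the current one.

    Each input spans two frames: [first semi-window | second semi-window].
    Illustrative sketch only, not the G.729.1 synthesis routine itself.
    """
    assert prev_windowed.shape == curr_windowed.shape == (2 * FRAME,)
    # second half of the previous output overlaps the first half of the
    # current output; their sum is the reconstructed previous frame
    return prev_windowed[FRAME:] + curr_windowed[:FRAME]
```

With a properly designed TDAC window (the squared window halves summing to one across the overlap), this superposition yields perfect reconstruction in the lossless case; the decision's point 8 then turns on what is fed into it when high band information is lost.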
8. Novelty of claim 1 appears thus to hinge on the manner in which the high band decoded signal of a voice frame (current or previous) is produced when high band information has been lost.
9. In this respect, reference is made to the indication in D7 that:
in the high band, the decoder is supplied with the previously received TDBWE time and frequency envelope parameters ...
(D7: page IV-532, section 3.5, lines 12-14).
Thus, D7 reflects the teaching in D8 (page 41, section 7.2 and page 42, figure 8) that previously received TDBWE (Time-Domain Bandwidth Extension) time and frequency envelope parameters are used to generate the TDBWE synthesis signal s_HB^(bwe)(n), which is then transformed into MDCT coefficients S_HB^(bwe)(k) (cf. D8, page 49, lines 1-3). These coefficients are required to replace the non-received sub-bands in the higher-band part of the spectrum (section 7.3.6 on page 52), that is, to determine the data corresponding to S_HB(m) appearing in equation 127 of D8.
10. In the case that a previous voice frame high band decoded signal has not been received, a signal is recovered via the previously received TDBWE time and frequency envelope parameters, thus anticipating both the method of claim 4 and its implementation in the device of claim 1.
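The concealment behaviour the Board reads onto D7 and D8 in points 9 and 10 can be sketched as follows. The class and method names, the parameter container, and the stand-in synthesis step are all assumptions made for illustration; only the control flow (reuse the previously received TDBWE parameters when the high band frame is lost) comes from the cited passages:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

FRAME = 160  # assumed high-band frame length in samples

@dataclass
class TdbweParams:
    """Hypothetical container for TDBWE time/frequency envelope parameters."""
    time_env: np.ndarray
    freq_env: np.ndarray

class HighBandConcealer:
    """Sketch of the concealment logic: if the current high band frame is
    lost, the decoder is supplied with the previously received TDBWE
    parameters (cf. D7, section 3.5). The synthesis step is a stand-in."""

    def __init__(self) -> None:
        self.last_params: Optional[TdbweParams] = None

    def decode_frame(self, params: Optional[TdbweParams]) -> np.ndarray:
        if params is None:
            # frame lost: fall back on the previously received parameters
            params = self.last_params
        else:
            self.last_params = params
        if params is None:
            # nothing received yet: output silence
            return np.zeros(FRAME)
        # stand-in synthesis: shape a flat excitation by the time envelope
        return np.repeat(params.time_env, FRAME // len(params.time_env))
```

Under this reading, a lost frame and a received frame with identical envelope parameters produce the same high band output, which is the mechanism the Board considers to anticipate the claimed recovery step.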
11. The subject-matter of claims 1 and 4 is thus not new in the sense of Article 54 EPC.
For these reasons it is decided that:
The appeal is dismissed.