`
`Application No. 17/302,699
`
`Amendments to the Claims
`
This listing of claims will replace all prior versions, and listings, of claims in the application.
`
`1-18.
`
`(€anceled)
`
`19,
`
`(New) A method, comprising:
`
`capturing an image or a sequence of images;
`
detecting pose information from the image or the sequence of images;
`
`converting the pose information into a feature vector;
`
converting the feature vector into a sign language string; and
`
`translating the sign language string into a target language output.
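
For illustration only, the following minimal Python sketch mirrors the steps recited in claim 19; every function name and body in it is a hypothetical placeholder standing in for a pose detector, a sign decoder, and a translator, and is not the claimed implementation.

    # Illustrative sketch of the claim 19 pipeline; every function body below is a
    # toy placeholder, not the claimed implementation.

    def detect_pose(image):
        # Placeholder "pose detector": treats the image as a flat list of numbers
        # and returns a short list of keypoint-like values.
        return [float(v) for v in image[:8]]

    def pose_to_feature_vector(pose):
        # Converting the pose information into a feature vector (simple rescaling).
        peak = max(abs(v) for v in pose) or 1.0
        return [v / peak for v in pose]

    def feature_vectors_to_sign_string(vectors):
        # Converting the feature vectors into a sign language string via a toy rule.
        return " ".join("HELLO" if sum(vec) >= 0 else "THANK-YOU" for vec in vectors)

    def translate_sign_string(sign_string):
        # Translating the sign language string into a target language output.
        lexicon = {"HELLO": "hello", "THANK-YOU": "thank you"}
        return " ".join(lexicon.get(gloss, gloss) for gloss in sign_string.split())

    def run_pipeline(images):
        poses = [detect_pose(img) for img in images]            # detecting pose information
        vectors = [pose_to_feature_vector(p) for p in poses]    # feature vectors
        sign_string = feature_vectors_to_sign_string(vectors)   # sign language string
        return translate_sign_string(sign_string)               # target language output

    if __name__ == "__main__":
        frames = [[3, 1, 4, 1, 5, 9, 2, 6], [-2, -7, -1, -8, -2, -8, -1, -8]]
        print(run_pipeline(frames))  # "hello thank you"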
`
`20.
`
`(New) The method of claim 19, wherein the image or the sequences of images is a 1, 2, or 3
`
`channel image.
`
21. (New) The method of claim 19, wherein converting the feature vector comprises:
`
`splitting the feature vector into individual regions; and
`
`processing the individual regions into a sign language string.
`
`22,
`
`(New) The method of claim 19, wherein converting the feature vector comprises applying a
`
`Convolutional Neural Network (CNN) configured to output one or more flag values associated with
`
`an intrasign region, an intersign region, or a non-signing region, and wherein the one or more flag
`
`values correspond to an individual sign.
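
As an illustrative companion to claim 22, the sketch below takes a sequence of per-frame flag values, assumed here to have already been produced by a CNN of the kind recited, and groups runs of intrasign flags into the frame spans that correspond to individual signs; the flag encoding and the grouping rule are assumptions made only for the example.

    # Illustrative grouping of per-frame flag values into individual signs. The
    # flag sequence in the example is assumed to come from a CNN as recited in
    # claim 22; the grouping logic here is a sketch, not the claimed method.

    INTRASIGN, INTERSIGN, NON_SIGNING = 0, 1, 2

    def flags_to_sign_spans(flags):
        """Return (start, end) frame indices for each run of intrasign flags,
        i.e. the frame ranges that correspond to individual signs."""
        spans, start = [], None
        for i, flag in enumerate(flags):
            if flag == INTRASIGN and start is None:
                start = i                 # a sign begins
            elif flag != INTRASIGN and start is not None:
                spans.append((start, i))  # an intersign or non-signing frame ends it
                start = None
        if start is not None:
            spans.append((start, len(flags)))
        return spans

    if __name__ == "__main__":
        flags = [2, 2, 0, 0, 0, 1, 0, 0, 1, 2]  # hypothetical per-frame CNN output
        print(flags_to_sign_spans(flags))       # [(2, 5), (6, 8)]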
`
`23.
`
`(New} The method of claim 21, wherein processing the individual regions comprises
`
`determining whether the individual regions are one of a pre-recorded sentence or an individual sign
`
`in one or more databases,
`
24. (New) The method of claim 21, wherein processing the individual regions comprises

applying a binary classifier to determine whether one or more of the individual regions is

fingerspelled.
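
The following sketch illustrates a binary classifier of the general kind recited in claim 24; the two input features, the logistic weights, and the 0.5 threshold are hypothetical choices made only to show a yes/no fingerspelling decision on a region.

    # Illustrative binary "fingerspelled or not" decision for one region; the
    # features, weights, and threshold are hypothetical.

    import math

    def fingerspelling_score(hand_motion, duration):
        # Toy logistic score: fingerspelling tends toward rapid hand-shape changes
        # (high hand_motion) over short per-letter durations.
        z = 3.0 * hand_motion - 2.0 * duration - 0.5
        return 1.0 / (1.0 + math.exp(-z))

    def is_fingerspelled(region):
        return fingerspelling_score(region["hand_motion"], region["duration"]) >= 0.5

    if __name__ == "__main__":
        print(is_fingerspelled({"hand_motion": 0.9, "duration": 0.3}))  # True
        print(is_fingerspelled({"hand_motion": 0.2, "duration": 1.5}))  # False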
`
`
`25.
`
`(New) The method of claim 21, wherein processing the individual regions comprises:
`
`comparing the individual regions to signs in one or more databases to generate comparison
`
`results; and
`
`choosing a sign based on a K Nearest Neighbor function or a Dynamic Time Warping
`
`function applied to the comparison results.
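
To illustrate claim 25, the sketch below compares a region, represented as a one-dimensional feature sequence, against a toy database of signs using a standard Dynamic Time Warping distance and chooses the nearest entry; the database contents and the 1-nearest-neighbor choice are assumptions made only for the example.

    # Illustrative comparison of a region against database signs using Dynamic
    # Time Warping, then choosing the closest sign. The tiny database stands in
    # for the one or more databases recited in claim 25.

    def dtw_distance(a, b):
        """Classic O(len(a) * len(b)) dynamic time warping over 1-D sequences."""
        inf = float("inf")
        cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
        cost[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[len(a)][len(b)]

    def choose_sign(region, database):
        """Return the database label whose exemplar is nearest under DTW."""
        return min(database, key=lambda label: dtw_distance(region, database[label]))

    if __name__ == "__main__":
        database = {"HELLO": [0.1, 0.8, 0.9, 0.2], "THANK-YOU": [0.9, 0.4, 0.1, 0.1]}
        region = [0.15, 0.75, 0.85, 0.8, 0.25]
        print(choose_sign(region, database))  # "HELLO"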
`
`26.
`
`(New) A method, comprising:
`
detecting hand pose information from an image;
`
`converting the hand pose information into a flattened feature vector;
`
`normalizing the flattened feature vector into a resultant feature vector;
`
`applying a convolutional neural network (CNN) to split the resultant feature vector into
`
`individual regions;
`
processing the individual regions into a sign language string based on language information
`
`in one or more databases; and
`
`translating the sign language string into a target language output.
`
`27,
`
`(New) The method of claim 26, wherein normalizing the flattened feature vector conrprises:
`
setting head coordinates to be (0, 0) in a pose and shoulders to be an average of 1 unit via a

first affine transform; and

setting mean coordinates of one or more hands to be (0, 0, 0), wherein a standard deviation

in the mean coordinates is an average of 1 unit via a second affine transform.
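
One plausible reading of the two affine transforms recited in claim 27 is sketched below: the first translates the pose so the head sits at (0, 0) and scales so the shoulder offsets average 1 unit, and the second centers the 3-D hand points at (0, 0, 0) and scales so their per-axis standard deviations average 1 unit; the keypoint layout is an assumption made only for the example.

    # Illustrative translation-and-scale affine transforms for the normalization
    # of claim 27; the keypoint layout (head, two shoulders, 3-D hand points) is
    # assumed for the example.

    from statistics import mean, pstdev

    def normalize_body(head, left_shoulder, right_shoulder):
        """First affine transform: head -> (0, 0), shoulder offsets averaging 1 unit."""
        def shift(p):  # translate so the head sits at the origin
            return (p[0] - head[0], p[1] - head[1])
        ls, rs = shift(left_shoulder), shift(right_shoulder)
        scale = mean((p[0] ** 2 + p[1] ** 2) ** 0.5 for p in (ls, rs)) or 1.0
        return [(p[0] / scale, p[1] / scale) for p in ((0.0, 0.0), ls, rs)]

    def normalize_hand(points):
        """Second affine transform: hand mean -> (0, 0, 0), per-axis standard
        deviation averaging 1 unit."""
        center = [mean(p[axis] for p in points) for axis in range(3)]
        centered = [tuple(p[axis] - center[axis] for axis in range(3)) for p in points]
        scale = mean(pstdev(p[axis] for p in centered) for axis in range(3)) or 1.0
        return [tuple(v / scale for v in p) for p in centered]

    if __name__ == "__main__":
        print(normalize_body((2.0, 3.0), (1.0, 2.0), (3.0, 2.0)))
        print(normalize_hand([(1.0, 2.0, 3.0), (3.0, 2.0, 1.0), (2.0, 5.0, 2.0)]))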
`
28. (New) The method of claim 26, wherein applying the CNN comprises:
`
`outputting one or more flag values associated with an intrasign region, an intersign region,
`
or a non-signing region, wherein the one or more flag values correspond to an individual sign.
`
`29.
`
`(New} The method of claim 26, wherein processing the individual regions comprises
`
`determining whether the individual regions are one of a pre-recorded seritence or an individual sign
`
`
in the one or more databases by applying a K Nearest Neighbor function or a Dynamic Time
`
`Warping function on the individual regions.
`
`30.
`
`€New) The method of claim 26, wherein processing the individual regions comprises
`
`applying a binaryclassifier to determine whether one or more of the individual regions is
`
`fingerspelled.
`
`31.
`
`(New) The method of claim 26, wherein processing the individual regions comprises:
`
comparing one or more of the individual regions to signs in the one or more databases to
`
`generate comparison results; and
`
`choosing a sign based on a K Nearest Neighbor function or a Dynamic Time Warping
`
`function applied to the comparison results.
`
`32.
`
`(New) The method of claim 26, further comprising outputting a plurality ofselectable
`
`translations associated with the target language output.
`
33. (New) A device, comprising:
`
`a camera configured to capture an image;
`
`a computing device coupled to the camera, the computing device comprising:
`
`a processor; and
`
a memory, wherein the memory contains instructions stored thereon that when

executed by the processor cause the processor to:
`
`detect hand pose information from the image;
`
`convert the hand pose information into a flattened feature vector;
`
normalize the flattened feature vector into a resultant feature vector;
`
apply a convolutional neural network (CNN) to split the resultant feature
`
`vector into individual regions;
`
`process the individual regions into a sign language string based on language
`
`information in one or more databases; and
`
`translate the sign language string into a target language output.
`
`
`34,
`
`(New) The device of claim 33, wherein to apply the CNN, the processoris configured to:
`
`output one or more flag values associated with an intrasign region, an intersign region, or a
`
`non-signing region, wherein the one or more flag values correspond to an individual sign.
`
`35.
`
`(New) The device of claim 33, wherein to process the individual regions, the processoris
`
`configured to determine whether the individual regions are one of a pre-recorded sentence or an
`
individual sign in the one or more databases by applying a K Nearest Neighbor function or a
`
`Dynamic Time Warping function on the individual regions.
`
`36.
`
`(New) The device of claim 33, wherein to process the individual regions, the processoris
`
`configured to apply a binary classifier to determine whether one or more of the individual regions is
`
`fingerspelled.
`
`37.
`
` €New) The device of claim 33, wherein to process the individual regions, the processoris
`
`configured to:
`
`compare the individual regions to signs in the one or more databases to generate comparison
`
results; and
`
`choose a sign based on a K Nearest Neighbor function or a Dynamic Time Warping function
`
`applied to the comparison results.
`
`38.
`
`(New) The device of claim 33, wherein the processor is further configured to output a
`
`plurality of selectable translations associated with the target language output.
`
`
`