- Document Number:
20250108813
- Appl. No:
18/375908
- Application Filed:
October 02, 2023
- Abstract:
Method and system for predicting driving behaviors of a driver by transforming trip data into an image representation are disclosed. For example, the method includes receiving trip data of one or more trips of a driver, dividing the trip data into a plurality of trip data segments based on a predetermined time period, wherein each trip data segment corresponds to a portion of the one or more trips, transforming the plurality of trip data segments into the image representation, and determining predicted driving behaviors of the driver based on the image representation of the one or more trips using a prediction model.
- Claim:
1. A computer-implemented method for predicting driving behaviors of a driver by transforming trip data into an image representation, the method comprising: receiving, by one or more processors, trip data of one or more trips of the driver from one or more sensors; dividing, by one or more processors, the trip data into a plurality of trip data segments based on a predetermined time period, each trip data segment corresponding to a portion of the one or more trips; transforming, by one or more processors, the plurality of trip data segments into the image representation; and determining, by one or more processors, predicted driving behaviors of the driver based on the image representation of the one or more trips using a prediction model.
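- Illustrative Sketch (claim 1):
Claim 1 (like the abstract) hinges on splitting raw trip data into segments of a predetermined time period before any image is built. Below is a minimal sketch of that dividing step, assuming trip records are simple dicts with a timestamp in seconds; the record layout and the 60-second period are illustrative assumptions, not details from the filing.

```python
# Sketch of the "divide by predetermined time period" step in claim 1.
# Trip records are assumed to be dicts with a 't' timestamp in seconds;
# the 60-second window is an arbitrary choice for illustration.

def segment_trip(records, period_seconds=60.0):
    """Group a single trip's records into consecutive fixed-length segments."""
    records = sorted(records, key=lambda r: r["t"])
    if not records:
        return []
    start = records[0]["t"]
    segments = []
    current = []
    for rec in records:
        # Start a new segment whenever the elapsed time crosses the period.
        if rec["t"] - start >= period_seconds:
            segments.append(current)
            current = []
            start = rec["t"]
        current.append(rec)
    if current:
        segments.append(current)
    return segments


# Example: 5 minutes of 1 Hz samples become five 60-second segments.
trip = [{"t": float(i), "speed": 10.0} for i in range(300)]
print(len(segment_trip(trip)))  # -> 5
```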
- Claim:
2. The method of claim 1, wherein transforming the plurality of trip data segments into the image representation comprises: generating, for each trip data segment, a graphical representation representing relative positions of the driver during the predetermined time period by extracting location information from the corresponding trip data segment; adding depth to each point of the graphical representation; and generating an image representation for each trip data segment.
- Claim:
3. The method of claim 2, wherein the depth includes one or more channels that represent sensor data, and the image representation is an n-dimensional graphical representation with n number of sensor data associated with each trip data segment.
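- Illustrative Sketch (claims 2-3):
Claims 2 and 3 describe plotting each segment's relative positions onto a grid and attaching sensor readings to each point as additional channels ("depth"), yielding an n-channel image per segment. Below is a rough numpy sketch under simplifying assumptions: positions are planar x/y offsets, the grid is 64x64, and speed and longitudinal acceleration stand in for the n sensor channels; none of these specifics are fixed by the filing.

```python
import numpy as np

def segment_to_image(segment, size=64, channels=("speed", "accel")):
    """Rasterize one trip segment into a (size, size, n) image.

    Each record is assumed to carry planar 'x'/'y' offsets plus the sensor
    fields named in `channels`; positions are taken relative to the first
    point of the segment, as in claim 2.
    """
    img = np.zeros((size, size, len(channels)), dtype=np.float32)
    if not segment:
        return img
    x0, y0 = segment[0]["x"], segment[0]["y"]
    xs = np.array([r["x"] - x0 for r in segment])
    ys = np.array([r["y"] - y0 for r in segment])
    # Scale the relative trajectory to fit the raster grid.
    span = max(np.abs(xs).max(), np.abs(ys).max(), 1e-6)
    cols = np.clip(((xs / span) * 0.5 + 0.5) * (size - 1), 0, size - 1).astype(int)
    rows = np.clip(((ys / span) * 0.5 + 0.5) * (size - 1), 0, size - 1).astype(int)
    for rec, row, col in zip(segment, rows, cols):
        # "Depth": one channel per sensor reading at each visited point.
        for k, name in enumerate(channels):
            img[row, col, k] = rec[name]
    return img

# A straight 60-point trajectory becomes a 64x64 image with 2 sensor channels.
seg = [{"x": float(i), "y": 0.0, "speed": 12.0, "accel": 0.1} for i in range(60)]
print(segment_to_image(seg).shape)  # -> (64, 64, 2)
```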
- Claim:
4. The method of claim 1, wherein determining the predicted driving behaviors of the driver based on the image representation of the one or more trips using the prediction model comprises: dividing the image representation into smaller patches using a patchify algorithm; and determining the predicted driving behaviors of the driver by inputting the smaller patches of the image representation into the prediction model.
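- Illustrative Sketch (claim 4):
Claim 4 feeds the prediction model fixed-size patches of the image rather than the full image. The claim only names "a patchify algorithm"; the reshape-based split below is one common way to cut non-overlapping patches and is an assumption about how that step might look, using an image shaped like the previous sketch's output.

```python
import numpy as np

def to_patches(image, patch=16):
    """Split an (H, W, C) image into non-overlapping (patch, patch, C) tiles.

    H and W are assumed to be multiples of `patch`; this reshape-based split
    is just one possible realization of the "patchify" step in claim 4.
    """
    h, w, c = image.shape
    tiles = image.reshape(h // patch, patch, w // patch, patch, c)
    tiles = tiles.transpose(0, 2, 1, 3, 4)        # (rows, cols, patch, patch, C)
    return tiles.reshape(-1, patch, patch, c)     # (num_patches, patch, patch, C)

image = np.zeros((64, 64, 2), dtype=np.float32)   # e.g. output of segment_to_image
patches = to_patches(image, patch=16)
print(patches.shape)                              # -> (16, 16, 16, 2)
```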
- Claim:
5. The method of claim 1, wherein determining predicted driving behaviors of the driver based on the image representation of the one or more trips using the prediction model comprises: extracting one or more features from the image representation of the one or more trips; and determining the predicted driving behaviors of the driver based at least in part upon the one or more extracted features.
- Claim:
6. The method of claim 5, wherein the one or more extracted features includes sudden acceleration or braking, frequent braking, sharp cornering, and/or slow cornering.
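- Illustrative Sketch (claims 5-6):
Claims 5 and 6 derive behavior features such as sudden acceleration or braking from the image representation. Below is a toy sketch of threshold-based counting over the channels produced by the earlier segment_to_image example; the channel layout and thresholds are illustrative assumptions, and the filing does not specify how the features are computed (cornering features would additionally need a lateral-acceleration or heading channel, omitted here).

```python
import numpy as np

# Illustrative thresholds (m/s^2); the filing does not specify actual values.
SUDDEN_ACCEL = 3.0
SUDDEN_BRAKE = -3.0

def extract_features(image, accel_channel=1):
    """Derive coarse behavior features from one segment image.

    Assumes the image was built like segment_to_image above, with a
    longitudinal-acceleration channel; unvisited pixels are zero and are
    excluded by masking on the visited cells.
    """
    accel = image[..., accel_channel]
    values = accel[accel != 0.0]
    return {
        "sudden_acceleration": int(np.sum(values > SUDDEN_ACCEL)),
        "sudden_braking": int(np.sum(values < SUDDEN_BRAKE)),
        "braking_events": int(np.sum(values < 0.0)),
    }

img = np.zeros((64, 64, 2), dtype=np.float32)
img[10, 10, 1] = 4.0   # one hard-acceleration sample
print(extract_features(img))
# -> {'sudden_acceleration': 1, 'sudden_braking': 0, 'braking_events': 0}
```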
- Claim:
7. The method of claim 1, further comprising training the prediction model, wherein training the prediction model comprises: receiving reference trip data of reference trips of a plurality of reference drivers; transforming the reference trip data into a training image representation; and training the prediction model using the training image representation of the reference trips; wherein the reference trip data includes telematics data associated with the reference trips taken by at least one driver of the plurality of reference drivers.
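- Illustrative Sketch (claim 7):
Claim 7 trains the prediction model on reference trips from a plurality of reference drivers after applying the same trip-to-image transform. Below is a sketch of assembling such a training set, reusing segment_trip and segment_to_image from the earlier sketches; the per-trip labels and their integer encoding are assumptions made for illustration, since the filing does not fix a label format.

```python
import numpy as np

def build_training_set(reference_trips, period_seconds=60.0):
    """Turn labeled reference trips into (images, labels) arrays.

    reference_trips: iterable of (records, label) pairs, where `records` is a
    list of trip samples as accepted by segment_trip and `label` is an integer
    class (e.g. 0 = safe, 1 = risky) -- an illustrative encoding, not from the
    filing. Reuses segment_trip and segment_to_image from the sketches above.
    """
    images, labels = [], []
    for records, label in reference_trips:
        for segment in segment_trip(records, period_seconds):
            images.append(segment_to_image(segment))
            labels.append(label)
    return np.stack(images), np.array(labels)
```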
- Claim:
8. The method of claim 7, wherein transforming the reference trip data into the training image representation comprises: dividing, for each reference trip, the reference trip data into a plurality of reference trip data segments based on a predetermined time period, each reference trip data segment corresponding to a portion of the reference trips; generating, for each reference trip data segment, a graphical representation representing relative positions of the corresponding driver during the predetermined time period by extracting location information from the corresponding reference trip data segment; adding depth to each point of the graphical representation, wherein the depth includes one or more channels that represent sensor data associated with the corresponding reference trip data segment; and generating an image representation for each reference trip data segment.
- Claim:
9. The method of claim 8, wherein the training image representation is an n-dimensional graphical representation with n number of sensor data associated with each reference trip data segment.
- Claim:
10. The method of claim 8, further comprising: dividing the image representation for each reference trip data segment into smaller patches using a patchify algorithm; and training the prediction model using the smaller patches of the training image representation.
- Claim:
11. The method of claim 1, wherein the prediction model is a convolutional neural network (CNN).
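- Illustrative Sketch (claim 11):
Claim 11 specifies a convolutional neural network as the prediction model. Below is a minimal PyTorch sketch of such a model over n-channel segment images like those built above, together with a single training step on stand-in data; the layer sizes, two-class output, and optimizer are illustrative assumptions rather than the claimed design.

```python
import torch
from torch import nn

class SegmentCNN(nn.Module):
    """Small convolutional network over n-channel segment images.

    The layer sizes and the two-class output (e.g. safe vs. risky driving)
    are illustrative assumptions; the filing only states that the prediction
    model is a CNN.
    """

    def __init__(self, in_channels=2, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),
        )

    def forward(self, x):                         # x: (batch, channels, 64, 64)
        return self.classifier(self.features(x))

# One illustrative training step on random stand-in data (NCHW layout).
model = SegmentCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 2, 64, 64)                # e.g. stacked segment images
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```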
- Claim:
12. A computing device for predicting driving behaviors of a driver by transforming trip data into an image representation, the computing device comprising: a processor; and a memory having a plurality of instructions stored thereon that, when executed by the processor, cause the computing device to: receive trip data of one or more trips of the driver from one or more sensors; divide the trip data into a plurality of trip data segments based on a predetermined time period, each trip data segment corresponding to a portion of the one or more trips; transform the plurality of trip data segments into the image representation; and determine predicted driving behaviors of the driver based on the image representation of the one or more trips using a prediction model.
- Claim:
13. The computing device of claim 12, wherein to transform the plurality of trip data segments into the image representation comprises to: generate, for each trip data segment, a graphical representation representing relative positions of the driver during the predetermined time period by extracting location information from the corresponding trip data segment; add depth to each point of the graphical representation; and generate an image representation for each trip data segment, wherein the depth includes one or more channels that represent sensor data, and the image representation is an n-dimensional graphical representation with n number of sensor data associated with each trip data segment.
- Claim:
14. The computing device of claim 12, wherein to determine predicted driving behaviors of the driver based on the image representation of the one or more trips using the prediction model comprises to: extract one or more features from the image representation of the one or more trips; and determine the predicted driving behaviors of the driver based at least in part upon the one or more extracted features.
- Claim:
15. The computing device of claim 12, wherein the plurality of instructions, when executed, further cause the computing device to train the prediction model, wherein to train the prediction model comprises to: receive reference trip data of reference trips of a plurality of reference drivers; transform the reference trip data into a training image representation; and train the prediction model using the training image representation of the reference trips; wherein the reference trip data includes telematics data associated with the reference trips taken by at least one driver of the plurality of reference drivers.
- Claim:
16. The computing device of claim 15, wherein to transform the reference trip data into the training image representation comprises to: divide, for each reference trip, the reference trip data into a plurality of reference trip data segments based on a predetermined time period, each reference trip data segment corresponding to a portion of the reference trips; generate, for each reference trip data segment, a graphical representation representing relative positions of the corresponding driver during the predetermined time period by extracting location information from the corresponding reference trip data segment; add depth to each point of the graphical representation, wherein the depth includes one or more channels that represent sensor data associated with the corresponding reference trip data segment; and generate an image representation for each reference trip data segment, wherein the training image representation is an n-dimensional graphical representation with n number of sensor data associated with each reference trip data segment.
- Claim:
17. A non-transitory computer-readable medium storing instructions for predicting driving behaviors of a driver by transforming trip data into an image representation, wherein the instructions, when executed, cause a computing device to: receive trip data of one or more trips of the driver from one or more sensors; divide the trip data into a plurality of trip data segments based on a predetermined time period, each trip data segment corresponding to a portion of the one or more trips; transform the plurality of trip data segments into the image representation; and determine predicted driving behaviors of the driver based on the image representation of the one or more trips using a prediction model.
- Claim:
18. The non-transitory computer-readable medium of claim 17, wherein to transform the plurality of trip data segments into the image representation comprises to: generate, for each trip data segment, a graphical representation representing relative positions of the driver during the predetermined time period by extracting location information from the corresponding trip data segment; add depth to each point of the graphical representation; and generate an image representation for each trip data segment, wherein the depth includes one or more channels that represent sensor data, and the image representation is an n-dimensional graphical representation with n number of sensor data associated with each trip data segment.
- Claim:
19. The non-transitory computer-readable medium of claim 17, wherein the computing device is further to train the prediction model, wherein to train the prediction model comprises to: receive reference trip data of reference trips of a plurality of reference drivers; transform the reference trip data into a training image representation; and train the prediction model using the training image representation of the reference trips; wherein the reference trip data includes telematics data associated with the reference trips taken by at least one driver of the plurality of reference drivers.
- Claim:
20. The non-transitory computer-readable medium of claim 19, wherein to transform the reference trip data into the training image representation comprises to: divide, for each reference trip, the reference trip data into a plurality of reference trip data segments based on a predetermined time period, each reference trip data segment corresponding to a portion of the reference trips; generate, for each reference trip data segment, a graphical representation representing relative positions of the corresponding driver during the predetermined time period by extracting location information from the corresponding reference trip data segment; add depth to each point of the graphical representation, wherein the depth includes one or more channels that represent sensor data associated with the corresponding reference trip data segment; and generate an image representation for each reference trip data segment, wherein the training image representation is an n-dimensional graphical representation with n number of sensor data associated with each reference trip data segment.
- Current International Class:
60; 06; 06; 06
- Accession Number:
edspap.20250108813