pymepix.processing.logic package¶
Submodules¶
pymepix.processing.logic.centroid_calculator module¶
- class pymepix.processing.logic.centroid_calculator.CentroidCalculator(cent_timewalk_lut=None, number_of_processes=1, clustering_args={}, dbscan_clustering=True, *args, **kwargs)[source]¶
Bases:
ProcessingStep
Class responsible for calculating centroids in timepix data. This includes first computing the clusters and then their centroids. The input is not the direct raw data but data that has already been processed by the PacketProcessor (x, y, tof, tot).
- process(data):
Process data and return the result. To use this class, only this method should be called; use the other methods only for testing or if you are sure about what you are doing.
- centroid_chunks_to_centroids(chunks)[source]¶
centroids = [[] for i in range(7)]
for chunk in list(chunks):
    if chunk is not None:
        for index, coordinate in enumerate(chunk):
            centroids[index].append(coordinate)
- property cs_max_dist_tof¶
Setting the maximal ToF distance between the voxels belonging to the cluster in Cluster Streaming algorithm
- property cs_min_cluster_size¶
Setting the minimal cluster size in Cluster Streaming algorithm
- property cs_sensor_size¶
Setting the size of the sensor used in the Cluster Streaming algorithm
- property cs_tot_offset¶
Setting the ToT ratio factor between a voxel and the previous voxel in the Cluster Streaming algorithm. A factor of zero means the ToT of the previous voxel must be larger; a factor of 0.5 means the ToT of the previous voxel only needs to exceed half the ToT of the considered voxel.
- property dbscan_clustering¶
- property epsilon¶
- property min_samples¶
- property tot_threshold¶
Determines which time over threshold values to filter before centroiding
This is useful in reducing the computational time in centroiding and can filter out noise.
- property triggers_processed¶
Setting for the number of triggers skipped during processing. Every triggers_processed-th trigger is processed. This means for a value of 1 every trigger is processed; for 2 only every 2nd trigger is processed.
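As a conceptual illustration of what the class computes per cluster, the centroid of one cluster of (x, y, tof, tot) voxels can be formed as a ToT-weighted mean of the coordinates. This is a simplified sketch, not pymepix's exact implementation; in particular, reducing the cluster's ToF and ToT to single values via the largest-ToT voxel is an assumption made for the example.

```python
# Sketch only: a ToT-weighted centroid for a single cluster of voxels.
# The real CentroidCalculator operates on whole arrays of labeled
# clusters; this illustrates the per-cluster idea.
def weighted_centroid(x, y, tof, tot):
    """ToT-weighted mean position; the ToF/ToT reduction is an assumption."""
    total = sum(tot)
    cx = sum(xi * w for xi, w in zip(x, tot)) / total
    cy = sum(yi * w for yi, w in zip(y, tot)) / total
    # Simplification: take ToF and ToT from the voxel with the largest ToT.
    i_max = max(range(len(tot)), key=lambda i: tot[i])
    return cx, cy, tof[i_max], tot[i_max]
```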
- class pymepix.processing.logic.centroid_calculator.CentroidCalculatorPooled(number_of_processes=None, *args, **kwargs)[source]¶
Bases:
CentroidCalculator
Parallelized implementation of CentroidCalculator using mp.Pool for parallelization.
- pymepix.processing.logic.centroid_calculator.calculate_centroids_dbscan(chunk, tot_threshold, _tof_scale, epsilon, min_samples, _cent_timewalk_lut)[source]¶
- pymepix.processing.logic.centroid_calculator.calculate_centroids_properties(shot, x, y, tof, tot, labels, _cent_timewalk_lut)[source]¶
Calculates the properties of the centroids from labeled data points.
ATTENTION! The order of the points can have an impact on the result due to errors in floating point arithmetic.
Very simple example:
arr = np.random.random(100)
arr.sum() - np.sort(arr).sum()
This example shows that there is a very small difference between the two sums. The inaccuracy of floating point arithmetic can depend on the order of the values. Strongly simplified: (3.2 + 3.4) + 2.7 and 3.2 + (3.4 + 2.7) can be unequal for floating point numbers.
Therefore there is no guarantee of strictly equal results, even after sorting. The error we observed was about 10^-22 nanoseconds.
Currently this issue exists only for the TOF column, as the other columns hold integer-based values.
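The order dependence described above can be demonstrated with plain Python floats: summing the same three values with different grouping gives results that differ in the last bit.

```python
# Demonstrates the order dependence of floating point addition:
# the same three doubles summed with different grouping differ
# by one unit in the last place under IEEE 754 double precision.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
different = left != right
```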
- pymepix.processing.logic.centroid_calculator.perform_clustering_dbscan(shot, x, y, tof, _tof_scale, epsilon, min_samples)[source]¶
The clustering with DBSCAN, which is performed in this function, is in rare cases dependent on the order of the data. Therefore, reordering by any means can lead to slightly changed results, which should not be an issue.
Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu: A Density-Based Algorithm for Discovering Clusters [pp. 229-230] (https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf). A more specific explanation can be found here: https://stats.stackexchange.com/questions/306829/why-is-dbscan-deterministic
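The `_tof_scale` parameter suggests that the ToF axis is rescaled so that a single epsilon is meaningful across both the spatial (pixel) and temporal dimensions before clustering. The following is a sketch of that idea under this assumption; the value of `tof_scale` used here is arbitrary, and the actual clustering in pymepix is done by DBSCAN rather than this plain distance computation.

```python
# Sketch: distance between two hits after scaling the ToF axis into
# pixel units, so one DBSCAN epsilon applies to all three dimensions.
import math

def scaled_distance(p, q, tof_scale):
    """Euclidean distance of (x, y, tof) points with ToF scaled."""
    dx = p[0] - q[0]
    dy = p[1] - q[1]
    dt = (p[2] - q[2]) * tof_scale
    return math.sqrt(dx * dx + dy * dy + dt * dt)

# Two hits: 1 pixel apart spatially, 100 ns apart in ToF.
# With an (arbitrary, illustrative) tof_scale of 1e7 pixels/second,
# 100 ns maps to exactly 1 pixel unit.
a = (10, 10, 0.0)
b = (11, 10, 100e-9)
d = scaled_distance(a, b, 1e7)
```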
pymepix.processing.logic.datatypes_tpx4 module¶
Defines different packet types in Timepix4
- class pymepix.processing.logic.datatypes_tpx4.PacketType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases:
IntEnum
Defines different types of Timepix4 packet
- CtrlDataTest = 234¶
Continuous flow of test packets
- CtrlDataTestB = 235¶
Continuous flow of test packets
- DESYHeader = 248¶
Indicates DESY-specific header added to data stream
- FrameEnd = 241¶
Indicates end of frame in frame mode
- FrameStart = 240¶
Indicates start of frame in frame mode
- Heartbeat = 224¶
Heartbeat timestamp
- NoData = 4¶
Slow control link packets of FFFFFFFF00000000 indicate no data - this is a special case
- PC24bitData = 3¶
Data from photon counting 24 bit mode
- PixelData = 1¶
Decoded Pixel Data
- RawData = 0¶
Raw 8-byte word
- SequenceEnd = 243¶
Indicates end of sequence in frame mode
- SequenceStart = 242¶
Indicates start of sequence in frame mode
- ShutterFall = 226¶
Shutter fall timestamp
- ShutterRise = 225¶
Shutter rise timestamp
- SignalFall = 229¶
Configurable input signal fall timestamp
- SignalRise = 228¶
Configurable input signal rise timestamp
- T0Sync = 227¶
Synchronisation timestamp
- TriggerData = 2¶
Decoded Trigger Data (NB: only distinguishable from PixelData after processing)
- Unknown = -1¶
Can be used to deal with other cases
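Since PacketType is an IntEnum, raw type codes can be mapped back to enum members directly, with Unknown as the fallback for unrecognised codes. A self-contained sketch (mirroring only a subset of the values listed above):

```python
from enum import IntEnum

# Subset of the PacketType values documented above, mirrored here so
# the example runs standalone.
class PacketType(IntEnum):
    Unknown = -1
    RawData = 0
    PixelData = 1
    TriggerData = 2
    Heartbeat = 224

def classify(code):
    """Map a raw type code to a PacketType, falling back to Unknown."""
    try:
        return PacketType(code)
    except ValueError:
        return PacketType.Unknown
```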
pymepix.processing.logic.packet_processor module¶
- class pymepix.processing.logic.packet_processor.PacketProcessor(handle_events=True, event_window=(0.0, 10000.0), position_offset=(0, 0), orientation=PixelOrientation.Up, start_time=0, timewalk_lut=None, *args, **kwargs)[source]¶
Bases:
ProcessingStep
Class responsible for transforming the raw data coming directly from the timepix into a more easily processable data format. Takes into account the pixel and trigger data to calculate toa and tof dimensions.
- process(data):
Process data and return the result. To use this class, only this method should be called; use the other methods only for testing or if you are sure about what you are doing.
- property event_window¶
- find_events_fast_post()[source]¶
Call this function at the very end to also have the last two trigger events processed.
- property handle_events¶
- class pymepix.processing.logic.packet_processor.PixelOrientation(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases:
IntEnum
Defines how row and col are interpreted in the output
- Down = 2¶
x=-column, y=-row
- Left = 1¶
x=row, y=-column
- Right = 3¶
x=-row, y=column
- Up = 0¶
Up is the default, x=column, y=row
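The four orientation mappings above can be written out as a small transform. This is a sketch following the enum documentation, not the library's own code:

```python
# Sketch: map (row, col) to (x, y) for each PixelOrientation value,
# per the documented conventions (Up=0, Left=1, Down=2, Right=3).
def apply_orientation(row, col, orientation):
    if orientation == 0:   # Up: x=column, y=row
        return col, row
    if orientation == 1:   # Left: x=row, y=-column
        return row, -col
    if orientation == 2:   # Down: x=-column, y=-row
        return -col, -row
    if orientation == 3:   # Right: x=-row, y=column
        return -row, col
    raise ValueError(f"unknown orientation: {orientation}")
```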
pymepix.processing.logic.packet_processor_factory module¶
pymepix.processing.logic.packet_processor_tpx4 module¶
- class pymepix.processing.logic.packet_processor_tpx4.PacketProcessor_tpx4(handle_events=True, event_window=(0.0, 200000000.0), position_offset=(0, 0), orientation=PixelOrientation.Up, start_time=0, timewalk_lut=None, reversebytes=True, *args, **kwargs)[source]¶
Bases:
ProcessingStep
Class responsible for transforming the raw data coming directly from the timepix into a more easily processable data format. Takes into account the pixel and trigger data to calculate toa and tof dimensions.
- process(data):
Process data and return the result. To use this class, only this method should be called; use the other methods only for testing or if you are sure about what you are doing.
- DESYheaderdecode(rawpackets, array=False)[source]¶
Placeholder function for dealing with packet headers added by our firmware. I suggest an 8-byte header as discussed below. The first byte would be 7C, with the corresponding "end of column" code F8 (since this code occupies bits 55:62).
- Parameters:
rawpacket (8-byte integer value (big endian format))
array (Boolean, default false. If true, input consists of an array that must be converted to 64 bit int.)
- Returns:
The decoded packet, with the following elements (proposed): - PacketType.DESYHeader - Chip number (0-127) - Image number (32 bit - up to 4e9) - Packet number (16 bit - up to 65000)
- Return type:
tuple
- property PC24bit¶
- controleventdecode(rawpackets)[source]¶
Decodes special packets in Frame mode aimed at helping image decoding. The image is split up into “sequences”, each of which corresponds to a different region of the chip, and each frame and sequence has a header and trailer (FrameStart, FrameEnd, SequenceStart, SequenceEnd)
- Parameters:
rawpacket (8-byte integer value (big endian format))
- Returns:
The decoded packet, with the following elements: - PacketType - Top - indicates if the data came from the top half (1) or bottom half (0) of the chip - Segment - each segment consists of 1/8 of the columns from one half of the chip - see manual - Readout mode - int Enum defined in datatypes.py, indicating 8 bit or 16 bit.
- Return type:
tuple
- dataPC24bitdecode(rawpackets)[source]¶
Decodes 24 bit photon counting mode data.
- Parameters:
rawpacket (8-byte integer value (big endian format, I believe))
- Returns:
The decoded packet, with the following elements: - PacketType.PC24bitData - Pixel x co-ordinate - Pixel y co-ordinate - 24-bit count value
- Return type:
tuple
- property event_window¶
- find_events_fast_post()[source]¶
Call this function at the very end to also have the last two trigger events processed.
- property handle_events¶
- process(data)[source]¶
The data should contain UDP packets with 54-byte headers at the beginning; the payload is 4960 bytes, so the UDP packet size is 5014 bytes: PACKET_HEADER_SIZE = 54, PACKET_LOAD_SIZE = 4960, PACKET_SIZE = 5014. New input: data actually starts from byte 61, last byte to ignore.
- process_pixels(rawpackets, gray=False)[source]¶
Decodes pixel hit data.
- Parameters:
rawpacket (8-byte integer value (big endian format, I believe))
- Returns:
The decoded packet, with the following elements: - PacketType.PixelData - Pixel x co-ordinate - Pixel y co-ordinate - Time of arrival (ns). This incorporates all three pixel internal timers, but not additional global information (e.g from heartbeat signal) and so rolls over every 1.5ms. Additionally, does not (yet) implement timewalk correction - Time over threshold (ns) - Pileup - indicates pileup occurred (1) or not (0)
- Return type:
tuple
- timestampeventdecode(rawpackets)[source]¶
Various Timepix4 special events which include timestamp data. These are Heartbeat, ShutterRise/Fall, T0Sync, SignalRise/Fall and CtrlDataTest
- Parameters:
rawpacket (8-byte integer value (big endian format))
- Returns:
The decoded packet, with the following elements: - PacketType - Timestamp (ns). This is 48 bits long with a 40 MHz clock - so it has 25ns resolution and a range of up to 7 million s (7e15 ns)
- Return type:
tuple
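The stated timestamp range can be checked with a quick calculation: a 48-bit counter ticking at 40 MHz (25 ns per tick) wraps after roughly 7e15 ns, i.e. about 7 million seconds.

```python
# Verify the documented timestamp range: 48 bits at 40 MHz.
TICK_NS = 25                     # 1 / 40 MHz, in nanoseconds
range_ns = (2 ** 48) * TICK_NS   # ~7.04e15 ns before rollover
range_s = range_ns / 1e9         # ~7.04e6 s (about 81 days)
```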
- class pymepix.processing.logic.packet_processor_tpx4.PixelOrientation(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases:
IntEnum
Defines how row and col are interpreted in the output
- Down = 2¶
x=-column, y=-row
- Left = 1¶
x=row, y=-column
- Right = 3¶
x=-row, y=column
- Up = 0¶
Up is the default, x=column,y=row
pymepix.processing.logic.processing_step module¶
- class pymepix.processing.logic.processing_step.ProcessingStep(name, parameter_wrapper_class=<class 'pymepix.processing.logic.shared_processing_parameter.SharedProcessingParameter'>)[source]¶
Bases:
ABC
Representation of one processing step in the pipeline for processing timepix raw data. Implementations are provided by PacketProcessor and CentroidCalculator. To combine those (and possibly other) classes into a pipeline they have to implement this interface. Also provides pre- and post-process implementations which are required for integration in the online processing pipeline (see PipelineCentroidCalculator and PipelinePacketProcessor).
- Currently the picture is the following:
For post processing, the CentroidCalculator and the PacketProcessor are used directly.
PipelineCentroidCalculator and PipelinePacketProcessor build on top of CentroidCalculator and PacketProcessor to provide integration into the existing online processing pipeline for online analysis.
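A custom step plugs into this picture by subclassing ProcessingStep and implementing process. The sketch below simplifies the base class (the real constructor also takes a parameter_wrapper_class, and the pre-/post-process hook names are assumptions); TotFilter is a hypothetical example step, not part of pymepix.

```python
from abc import ABC, abstractmethod

# Simplified stand-in for pymepix's ProcessingStep (assumption: the
# real base class also wires shared parameters and pipeline hooks).
class ProcessingStep(ABC):
    def __init__(self, name):
        self.name = name

    def pre_process(self):
        pass

    @abstractmethod
    def process(self, data):
        ...

    def post_process(self):
        pass

# Hypothetical example step: drop hits below a ToT threshold from
# (x, y, tof, tot) column data.
class TotFilter(ProcessingStep):
    def __init__(self, threshold):
        super().__init__("TotFilter")
        self.threshold = threshold

    def process(self, data):
        x, y, tof, tot = data
        keep = [i for i, t in enumerate(tot) if t >= self.threshold]
        return tuple([col[i] for i in keep] for col in (x, y, tof, tot))
```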