21 results on '"Pierre Clarel Catherine"'
Search Results
2. Early CU size determination in HEVC intra prediction using Average Pixel Cost.
- Author
- Kanayah Saurty, Pierre Clarel Catherine, and K. M. S. Soyjaudah
- Published
- 2014
3. Parallel concatenation of recursive systematic convolutional codes with transmission of interleaved set.
- Author
- Insah Bhurtah, Pierre Clarel Catherine, and K. M. Sunjiv Soyjaudah
- Published
- 2013
4. Terminating CU Processing in HEVC Intra-Prediction Adaptively Based on Residual Statistics
- Author
- Pierre Clarel Catherine, Kanayah Saurty, and K. M. S. Soyjaudah
- Subjects
- Normalization property, Computer science, Statistics, Residual, Coding tree unit, Coding (social sciences), Data compression
- Abstract
The current standard in video compression, High Efficiency Video Coding (HEVC/H.265), provides superior compression performance compared to its H.264 predecessor. However, the large Coding Tree Unit (CTU) in H.265 brings a considerable increase in processing time. In this paper, a method for terminating Coding Unit (CU) processing early is proposed, based on the luma residual statistics gathered while encoding the initial frames of the sequence. The gathered statistics are then adaptively formulated into thresholds, which are used to avoid unnecessary processing of potential CUs during subsequent frames. Experimental results indicate that the encoding time can be reduced by 36.1% on average compared to HM16, with a BD-Rate increase of only 0.29%. (An illustrative sketch follows this entry.)
- Published
- 2019
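Illustrative sketch (Python/NumPy) of the adaptive early-termination rule described in the abstract above: residual statistics are collected for CUs that the full search left unsplit during the initial frames, a percentile of that distribution becomes the threshold, and later CUs whose statistic falls below it skip further processing. The class, the mean-square statistic and the percentile parameter are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class EarlyCUTermination:
    """Adaptive early CU termination from luma residual statistics (illustrative only)."""

    def __init__(self, percentile=90.0):
        self.percentile = percentile   # hypothetical tuning parameter
        self.stats = []                # statistics gathered during the initial frames
        self.threshold = None

    def observe(self, luma_residual, was_not_split):
        # Training phase: record the mean-square residual of CUs that the
        # full RDO search decided not to split.
        if was_not_split:
            r = np.asarray(luma_residual, dtype=np.float64)
            self.stats.append(float(np.mean(r ** 2)))

    def finalize(self):
        # Turn the gathered statistics into an adaptive threshold.
        if self.stats:
            self.threshold = float(np.percentile(self.stats, self.percentile))

    def terminate_early(self, luma_residual):
        # Subsequent frames: skip further splitting when the statistic is below the threshold.
        if self.threshold is None:
            return False
        r = np.asarray(luma_residual, dtype=np.float64)
        return float(np.mean(r ** 2)) < self.threshold
```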
5. An Investigation of the TCP Meltdown Problem and Proposing Raptor Codes as a Novel Solution to Decrease TCP Retransmissions in VPN Systems
- Author
- Irfaan Coonjah, K. M. S. Soyjaudah, and Pierre Clarel Catherine
- Subjects
- Focus (computing), Computer science, Scalability, Process (computing), Layer (object-oriented design), Protocol (object-oriented programming), Queue, Raptor code, Computer network
- Abstract
When TCP was designed, the protocol designers did not cater for the problem of running TCP within itself, and the TCP-in-TCP dilemma was not originally addressed. The protocol is meant to be reliable and uses adaptive timeouts to decide when a resend should occur. This design can fail when TCP connections are stacked, and the resulting network slowdown is known as the "TCP meltdown problem." It happens when a slower outer connection causes the upper layer to queue up more retransmissions than the lower layer is able to process. The designers of the OpenVPN Virtual Private Networking product accommodated the problems that may occur when tunneling TCP within TCP by using UDP as the base for communication to increase performance. However, UDP is unreliable, and not all VPN systems support UDP tunneling. This paper seeks to provide systems with low-latency primitives for reliable communication that are fundamentally scalable and robust. The authors focus on proposing Raptor codes to solve the TCP meltdown problem in VPN systems and to decrease delays and overheads, and they simulate the TCP meltdown problem inside a VPN tunnel.
- Published
- 2018
6. A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric
- Author
- I. Bhurtah-Seewoosungkur, K. M. S. Soyjaudah, and Pierre Clarel Catherine
- Subjects
- Discrete mathematics, General Computer Science, channel coding, AWGN channels, Combinatorics, block codes, error correction codes, Betweenness centrality, Metric (mathematics), Electrical and Electronic Engineering, Low-density parity-check code, parity check codes, Mathematics
- Abstract
Progressive Edge-Growth (PEG) constructions are usually based on optimizing a distance metric by various methods. In this work, however, the distance metric is replaced by the betweenness centrality metric, which has been shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on this metric, borrowed from social network analysis, given that the bipartite graph describing an LDPC code is analogous to a network of nodes. The algorithm fills the bipartite graph efficiently, adding its connections in an edge-by-edge manner. The smallest graph sizes the new construction can achieve surpass those obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular column-weight-two LDPC graphs. In addition, the technique proves competitive in terms of error-correcting performance: when compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties. (An illustrative sketch follows this entry.)
- Published
- 2016
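Illustrative sketch (Python, networkx) of a PEG-style construction driven by betweenness centrality rather than the usual distance metric: edges are added one by one, and each new edge goes to the check node chosen by a centrality-based criterion. The concrete rule used here (lowest-degree check node, ties broken by lowest betweenness) is an assumption made for illustration, not the selection rule of the paper.

```python
import networkx as nx

def peg_betweenness(n_var, n_chk, var_degree=2):
    """Toy edge-by-edge bipartite construction using betweenness centrality."""
    g = nx.Graph()
    var_nodes = [f"v{i}" for i in range(n_var)]
    chk_nodes = [f"c{j}" for j in range(n_chk)]
    g.add_nodes_from(var_nodes, bipartite=0)
    g.add_nodes_from(chk_nodes, bipartite=1)

    for v in var_nodes:
        for _ in range(var_degree):
            bc = nx.betweenness_centrality(g)   # recomputed as the graph grows
            candidates = [c for c in chk_nodes if not g.has_edge(v, c)]
            # Assumed rule: prefer low-degree checks, break ties by low betweenness.
            best = min(candidates, key=lambda c: (g.degree(c), bc[c]))
            g.add_edge(v, best)
    return g

# Example: a small column-weight-two graph, the regular case highlighted in the paper.
graph = peg_betweenness(n_var=12, n_chk=6, var_degree=2)
```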
7. Design and Implementation of UDP Tunneling Based on OpenSSH VPN
- Author
- Pierre Clarel Catherine, K. M. S. Soyjaudah, and Irfaan Coonjah
- Subjects
- Open source, Computer science, Bandwidth (signal processing), Encryption, Computer network
- Abstract
This paper focuses on two commonly used VPNs: OpenVPN and OpenSSH. Both VPN solutions are open source, cross-platform, secure, and highly configurable. OpenSSH is part of a large research group, OpenBSD, and is integrated by default into routers, switches and almost all operating systems. Compared to OpenSSH, OpenVPN is less widely used and its research group is small. OpenVPN can be used only for tunneling, whereas OpenSSH has many features of which tunneling is only one. The research by the OpenBSD developers is moving towards security and encryption and has not progressed in the field of VPN tunneling since 2011. The only weakness of the OpenSSH VPN is that it does not support UDP as the mode of communication, whereas OpenVPN can use both TCP and UDP. There is a need for UDP tunnels in industry in situations where bandwidth is critical (e.g., satellite links), and the default option for such VPN systems is OpenVPN. The authors of this paper modified the OpenSSH implementation by adding support for a UDP-based connection to its VPN functionality, so that OpenSSH can be the VPN of choice for industry.
- Published
- 2018
8. Fast adaptive inter-splitting decisions for HEVC based on luma residuals
- Author
- K. M. S. Soyjaudah, Pierre Clarel Catherine, and Kanayah Saurty
- Subjects
- Reduction (complexity), Motion estimation, Algorithmic efficiency, Real-time computing, Quadtree, Encoder, Algorithm, Coding tree unit, Random access, Coding (social sciences), Mathematics
- Abstract
The long encoding time of High Efficiency Video Coding (HEVC), compared to its predecessor Advanced Video Coding (AVC), is mostly associated with the large number of Coding Units (CUs) to be processed during the quadtree splitting of the 64 × 64 Coding Tree Unit (CTU), along with the improved but intensive Motion Estimation (ME) techniques. In this paper, the unnecessary processing of some CUs during the recursive splitting of the CTU, along with some of the two-PU mode operations, is skipped so as to bring a significant time reduction for the encoder. Statistical distributions of the HEVC splitting decisions based on the Mean Square (MS) values of each 8 × 8 block within the 2N × 2N luma residuals are adaptively constructed during the starting frames of each sequence. Thereafter, thresholds for early termination of the CU and early identification of the 2N × 2N PU mode based on these distributions are applied during the encoding of subsequent inter frames. The proposed inter-mode scheme significantly reduces the total encoding time with negligible loss of coding efficiency. Experimental results show that the proposed scheme achieves 47.0% encoding time savings with a Bjontegaard Delta bitrate (BDBR) increase of only 0.57% for various test sequences under random-access conditions. (An illustrative sketch follows this entry.)
- Published
- 2017
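Illustrative sketch (Python/NumPy) of the per-block feature used by the scheme in the abstract above: the mean-square value of each 8 × 8 block inside a 2N × 2N luma residual. The all-blocks-below-threshold decision shown here is an assumption for illustration; the paper derives its thresholds from statistical distributions built during the starting frames.

```python
import numpy as np

def block_mean_squares(luma_residual, block=8):
    """Mean-square value of every 8x8 block within a square luma residual."""
    r = np.asarray(luma_residual, dtype=np.float64)
    n = r.shape[0]
    return np.array([
        np.mean(r[y:y + block, x:x + block] ** 2)
        for y in range(0, n, block)
        for x in range(0, n, block)
    ])

def skip_further_splitting(luma_residual, threshold):
    # Assumed rule: do not split the CU when every 8x8 block is below the threshold.
    return bool(np.all(block_mean_squares(luma_residual) < threshold))
```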
9. Evaluation of UDP tunnel for data replication in data centers and cloud environment
- Author
- Irfaan Coonjah, K. M. S. Soyjaudah, and Pierre Clarel Catherine
- Subjects
- Computer science, Network packet, Reliability (computer networking), Cloud computing, UDP flood attack, Replication (computing), Packet loss, Synchronization (computer science), Data center, Computer network
- Abstract
The era of data centers is underway. Cloud is an off-premise form of computing that stores data on the Internet, whereas a data center refers to on-premise storage. For cloud-hosting purposes, vendors often own multiple data centers in several geographic locations to safeguard data availability during outages and other data center failures. Data replication and synchronization techniques have recently attracted considerable attention from the computing community. Data transferred during replication needs to be secure, and tunneling is one way to secure information before sending it to an off-premise replication site. The two kinds of tunnel that exist are TCP and UDP tunnels. A UDP tunnel is claimed to be faster for data transfer than a TCP tunnel, mainly because UDP does not make use of excessive acknowledgment messages and does not suffer from the TCP meltdown problem. In situations where bandwidth is limited, UDP is the preferred option. This paper addresses the reliability of UDP tunnels by measuring the packet drops when sending different packet sizes. The authors want to determine whether a UDP tunnel can be used as the mode of transfer during data replication. A series of tests has been performed and the MTU size has been adjusted for minimal packet loss. The authors demonstrate that a UDP tunnel with an MTU size of 1150 bytes can be used as the mode of transfer for data centers. (An illustrative sketch follows this entry.)
- Published
- 2016
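Illustrative sketch (Python) of the kind of measurement reported above: UDP datagrams of a chosen payload size are sent through the tunnel and drops are counted. The echo endpoint, payload size and counts are hypothetical; note that a 1150-byte MTU leaves roughly 1122 bytes of UDP payload once the 20-byte IPv4 and 8-byte UDP headers are accounted for.

```python
import socket
import time

def measure_udp_loss(host, port, payload_size=1122, count=1000, timeout=0.2):
    """Send `count` datagrams to a (hypothetical) UDP echo service and return the loss rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    received = 0
    for _ in range(count):
        sock.sendto(bytes(payload_size), (host, port))
        try:
            sock.recvfrom(65535)
            received += 1
        except socket.timeout:
            pass              # counted as a drop
        time.sleep(0.001)     # modest pacing so the tunnel queue is not flooded
    sock.close()
    return 1.0 - received / count

# Example (addresses are placeholders for a replication endpoint inside the tunnel):
# loss = measure_udp_loss("10.8.0.1", 9999, payload_size=1122)
```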
10. Fast Intra Mode Decision for HEVC
- Author
- Kanayah Saurty, Pierre Clarel Catherine, and K. M. S. Soyjaudah
- Subjects
- Computer science, Prediction methods, Time saving, Rate distortion, Algorithm, Intra mode, Encoder complexity, Coding (social sciences), Data compression
- Abstract
High Efficiency Video Coding (HEVC/H.265), the latest standard in video compression, aims to halve the bitrate while maintaining the same quality, or to achieve the same bitrate with improved quality, compared to its predecessor AVC/H.264. However, the increase in prediction modes in HEVC significantly impacts the encoder complexity. Intra prediction iterates over 35 modes for each Prediction Unit (PU) to select the optimal one. This mode decision procedure, which consumes around 78% of the time spent in intra prediction, consists of the Rough Mode Decision (RMD), the simplified Rate-Distortion Optimisation (RDO) and the full RDO processes. In this chapter, considerable time reduction is achieved by using techniques that evaluate fewer modes in both the RMD and the simplified RDO processes. Experimental results show that the proposed method yields a 42.1% time saving on average with an acceptable drop of 0.075 dB in PSNR and a negligible increase of 0.27% in bitrate. (An illustrative sketch follows this entry.)
- Published
- 2016
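Illustrative sketch (Python) of the two-stage pruning idea behind the chapter: a cheap rough cost (such as the SATD-based cost of the RMD stage) ranks the 35 HEVC intra modes, and the expensive rate-distortion cost is evaluated only on a short list. Both cost functions and the list size are placeholders, not the chapter's exact reductions.

```python
def fast_intra_mode_decision(rough_cost, rdo_cost, num_candidates=3):
    """Pick an intra mode by pruning with a cheap cost before running full RDO.

    rough_cost: callable mode -> low-complexity cost (RMD-style, assumed)
    rdo_cost:   callable mode -> full rate-distortion cost (assumed)
    """
    modes = range(35)                                         # HEVC intra modes 0..34
    shortlist = sorted(modes, key=rough_cost)[:num_candidates]
    return min(shortlist, key=rdo_cost)                       # full RDO only on the short list
```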
11. Performance evaluation and analysis of layer 3 tunneling between OpenSSH and OpenVPN in a wide area network environment
- Author
- Pierre Clarel Catherine, Irfaan Coonjah, and K. M. S. Soyjaudah
- Subjects
- Computer science, Secure Shell, Shared medium, Encryption, Tunneling protocol, Wide area network, Internet Protocol, The Internet, Private network, Computer network
- Abstract
Virtual Private Networks (VPNs) provide secure, encrypted communication between remote networks worldwide by using Internet Protocol (IP) tunnels and a shared medium like the Internet. End-to-end connectivity is established by tunneling. OpenVPN and OpenSSH are cross-platform, secure, highly configurable VPN solutions; however, a performance comparison between OpenVPN and OpenSSH VPNs has not yet been undertaken. This paper provides such a comparison and evaluates the efficiency of these VPNs over Wide Area Network (WAN) connections, with the same conditions maintained for a fair comparison. To the best knowledge of the authors, these are the first reported test results for these two commonly used VPN technologies. Three parameters, namely speed, latency and jitter, are evaluated. Using a real-life scenario deployed over the Linux operating system, a comprehensive, in-depth comparative analysis of the VPN mechanisms is provided. Results of the analysis between OpenSSH and OpenVPN show that OpenSSH utilizes the link better and significantly improves transfer times. (An illustrative sketch follows this entry.)
- Published
- 2015
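Illustrative sketch (Python) of one of the three measurements above (speed): a fixed amount of data is pushed over a TCP connection routed through the tunnel under test and the achieved rate is reported. The sink address and transfer size are hypothetical; latency and jitter would be probed separately.

```python
import socket
import time

def measure_throughput(host, port, total_bytes=50 * 1024 * 1024):
    """Stream `total_bytes` to a (hypothetical) sink through the tunnel and return Mbit/s."""
    chunk = bytes(64 * 1024)
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(chunk)
            sent += len(chunk)
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6

# Example (placeholder address of a sink reachable only through the VPN):
# rate = measure_throughput("10.8.0.1", 5001)
```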
12. Experimental performance comparison between TCP vs UDP tunnel using OpenVPN
- Author
- K. M. S. Soyjaudah, Irfaan Coonjah, and Pierre Clarel Catherine
- Subjects
- Engineering, TCP acceleration, Point-to-Point Tunneling Protocol, Datagram, Performance comparison, TCP hole punching, Zeta-TCP, Latency (engineering), UDP flood attack, Computer network
- Abstract
The comparison between TCP and UDP tunnels has not been sufficiently reported in the scientific literature. In this work, we use OpenVPN as a platform to compare the performance of TCP and UDP tunnels. The de facto belief has been that a TCP tunnel provides a permanent tunnel and therefore ensures a reliable transfer of data between two end points. However, the effects of transmitting TCP within a UDP tunnel are worth exploring. The results provided in this paper demonstrate that TCP in a UDP tunnel indeed provides better latency. Throughout this paper, a series of tests has been performed: UDP traffic was sent inside a UDP tunnel and a TCP tunnel successively, and the same tests were repeated with TCP traffic.
- Published
- 2015
13. 6to4 tunneling framework using OpenSSH
- Author
- Irfaan Coonjah, Pierre Clarel Catherine, and K. M. S. Soyjaudah
- Subjects
- Computer science, Secure Shell, Tunneling protocol, IPv4, IPv6, 6to4, Quantum tunnelling, Computer network
- Abstract
6to4 tunneling enables IPv6 hosts and routers to connect with other IPv6 hosts and routers over the existing IPv4 Internet. The main purpose of IPv6 tunneling is to maintain compatibility with the large existing base of IPv4 hosts and routers. OpenSSH VPN tunneling is said to have limitations with numerous IPv6 clients, and it is therefore usually advised to use OpenVPN instead. To the best knowledge of the authors, this is the first reported successful implementation of 6to4 tunneling over OpenSSH with more than one client. This proof of concept therefore positions OpenSSH as a potential alternative to conventional VPNs.
- Published
- 2015
14. Efficient Recovery Technique for Low-Density Parity-Check Codes Using Reduced-Set Decoding
- Author
- K. M. S. Soyjaudah and Pierre Clarel Catherine
- Subjects
- Block code, Theoretical computer science, Concatenated error correction code, BCJR algorithm, General Medicine, Sequential decoding, Serial concatenated convolutional codes, Linear code, Hardware and Architecture, Electrical and Electronic Engineering, Low-density parity-check code, Algorithm, Factor graph, Mathematics
- Abstract
We introduce a recovery algorithm for low-density parity-check codes that provides a substantial coding gain over the conventional method. Concisely, it consists of an inference procedure based on successive decoding rounds using different subsets of bit nodes from the bipartite graph representing the code. The technique also sheds light on certain characteristics of the sum-product algorithm and effectively copes with the problems of trapping sets, cycles, and other anomalies that adversely affect the performance of LDPC codes. (An illustrative sketch follows this entry.)
- Published
- 2008
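Illustrative sketch (Python/NumPy) of the retry loop suggested by the abstract above, assuming a hypothetical `spa_decode(llr) -> (codeword, ok)` sum-product decoder: decode first with the full set of bit nodes, then on failure retry with successive subsets in which some bit nodes are treated as erased (their channel LLRs zeroed). Both the decoder interface and the zero-LLR erasure mechanism are assumptions, not the paper's exact procedure.

```python
import numpy as np

def reduced_set_decode(spa_decode, llr, decoding_sets):
    """Retry SPA decoding with different subsets of bit nodes erased (illustrative only).

    decoding_sets: per retry, the indices of bit nodes whose channel LLRs are zeroed.
    """
    codeword, ok = spa_decode(llr)             # first round: all bit nodes present
    if ok:
        return codeword
    for erase_idx in decoding_sets:
        trial = np.array(llr, dtype=float)
        trial[list(erase_idx)] = 0.0           # erased bit nodes carry no channel evidence
        codeword, ok = spa_decode(trial)
        if ok:
            return codeword
    return codeword                            # best effort if every round fails
```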
15. Enhancing the error-correcting performance of LDPC codes for LTE and WiFi
- Author
- Pierre Clarel Catherine, K. M. Sunjiv Soyjaudah, and Insah Bhurtah
- Subjects
- Block code, Theoretical computer science, Computer science, Concatenated error correction code, Turbo code, List decoding, Sequential decoding, Serial concatenated convolutional codes, Low-density parity-check code, Algorithm, Factor graph
- Abstract
Low-density parity-check (LDPC) codes are among the most powerful error-correcting codes for wireless communication of high-speed data. This power comes from an efficient decoding scheme, the sum-product algorithm (SPA), which runs over the bipartite graph defining the code. In this work, we propose to use a modified version of the decoding algorithm on the LDPC codes used in Long Term Evolution (LTE) and WiFi, and demonstrate that with the same number of iterations, without any increase in time complexity, we can achieve enhanced error-correcting performance. The only trade-off of the system is the necessary storage of the decoding sets, for a total size of n_s × n, where n_s is the number of decoding sets used and n is the codeword length. In this work, the maximum value of n_s used is 5, which makes the trade-off fairly acceptable, considering that the coding gain is obtained virtually free of charge.
- Published
- 2015
16. A VPN framework through multi-layer tunnels based on OpenSSH
- Author
- K. M. Sunjiv Soyjaudah, Pierre Clarel Catherine, and Irfaan Coonjah
- Subjects
- Authentication, Computer science, Wide area network, Server, Secure Shell, The Internet, Cryptography, Encryption, Private network, Computer network
- Abstract
This paper details how to set up and test the new tunneling features of OpenSSH to establish an enhanced SSH Layer 3 VPN between three computers in a Wide Area Network (WAN) environment. The OpenSSH security features are explored to provide secure tunneling and different authentication methods. Using OpenSSH to build a VPN caters for security by encrypting the data transmitted across the public network to private networks. OpenSSH VPN is said to have limitations with numerous clients, and it is therefore usually advised to use OpenVPN when dealing with more than one client. To the best knowledge of the authors, this is the first reported successful implementation of a wide-area-network VPN using OpenSSH tunnels with multiple clients. This proof of concept therefore positions OpenSSH as a potential alternative to the more conventional OpenVPN.
- Published
- 2015
17. Early CU size determination in HEVC intra prediction using Average Pixel Cost
- Author
- Pierre Clarel Catherine, Kanayah Saurty, and K. M. S. Soyjaudah
- Subjects
- Reduction (complexity), Pixel, Computer science, Algorithmic efficiency, Encoding (memory), Real-time computing, Encoder, Algorithm, Context-adaptive binary arithmetic coding, Harmonic Vector Excitation Coding, Term (time)
- Abstract
The HEVC (H.265) standard has brought significant improvement in terms of coding efficiency. However, this reduction in bit rate (almost half) comes along with an increase in complexity, resulting in a very high compression time. This paper proposes a novel method to reduce the complexity of the intra-mode encoding process. We define a new term, the Average Pixel Cost (APC), for a CU. When the APC of a CU is below a specified threshold, the splitting process is stopped and the encoder no longer needs to address the smaller CUs in the quad-tree. This results in considerable time savings for the encoder with negligible deterioration in quality and bitrate. Experimental results, based on this method alone, show an average reduction of 31.3% in compression time in HM 10.0, with an increase of 0.7% in bitrate and a negligible decrease in PSNR (−0.03 dB), using an APC threshold of 20. (An illustrative sketch follows this entry.)
- Published
- 2014
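The Average Pixel Cost test reduces to a few lines. In this Python sketch the CU's rate-distortion cost is assumed as the numerator, which is an interpretation rather than the paper's exact definition, while the threshold of 20 is the value quoted in the abstract.

```python
def average_pixel_cost(cu_cost, cu_size):
    """APC: the CU's cost spread over its pixels (cost definition assumed)."""
    return cu_cost / float(cu_size * cu_size)

def stop_splitting(cu_cost, cu_size, apc_threshold=20.0):
    # Early CU size determination: below the threshold, the quad-tree split is not evaluated.
    return average_pixel_cost(cu_cost, cu_size) < apc_threshold
```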
18. Parallel concatenation of recursive systematic convolutional codes with transmission of interleaved set
- Author
- K. M. Sunjiv Soyjaudah, Pierre Clarel Catherine, and Insah Bhurtah
- Subjects
- Soft-decision decoder, Turbo equalizer, Theoretical computer science, Convolutional code, Concatenated error correction code, Turbo code, List decoding, Serial concatenated convolutional codes, Sequential decoding, Algorithm, Mathematics
- Abstract
In this work, a scheme for the parallel concatenation of convolutional codes is proposed. The systematic and interleaved sets are both encoded. Unlike conventional turbo codes, however, both encoded sequences are concatenated and transmitted. At the receiving side, the first half of the corrupted sequence (corresponding to its encoded counterpart at the transmission side) is used as input to a soft-in soft-out (SISO) decoder. The second half is, in a similar manner, used by a second SISO decoder. These two decoders then use the maximum a posteriori (MAP) algorithm for decoding. The key difference with conventional turbo schemes, however, is that the channel metric is not removed when computing the extrinsic information for the first iteration. This is made possible since both decoders have access to an independently corrupted set. As a consequence, there is almost no gap in performance with increased signal-to-noise ratio. In addition, for the remaining iterations, the proposed scheme employs conventional decoding and makes use of an inherent stopping criterion that allows the decoding procedure to be stopped once both decoders converge to the same sequence. Thus, the system makes efficient use of iterations while reducing the computing resources required for decoding.
- Published
- 2013
19. Parallel Concatenation of LDPC Codes with Two Sets of Source Bits
- Author
- K. M. S. Soyjaudah and Pierre Clarel Catherine
- Subjects
- Block code, Theoretical computer science, BCJR algorithm, Concatenated error correction code, Turbo code, Forward error correction, Serial concatenated convolutional codes, Low-density parity-check code, Linear code, Algorithm, Mathematics
- Abstract
Conventional attempts at using parallel concatenation for LDPC codes have not been widely successful. Interestingly, existing schemes do not rely on the concatenating architecture itself, but rather on the complementary profile of two carefully selected component codes: each code individually drives the decoding process over the signal-to-noise ratio range in which it excels. In this work, however, a concatenating scheme is proposed that is not limited by specific choices of component codes. In addition, the scheme departs from conventional turbo-style settings by transmitting two sets of source bits over the channel, instead of just one. At the receiving side, two decoders are set up and share extrinsic information. The key difference with the conventional turbo style, however, is that the channel information (being independent for both decoders) is not removed when computing the extrinsic information. As the signal-to-noise ratio increases, the associated impact of this modification results in a valuable performance gain.
- Published
- 2011
20. Erasing Bit Nodes on the Bipartite Graph for Enhanced Performance of LDPC Codes
- Author
- Pierre Clarel Catherine and K. M. S. Soyjaudah
- Subjects
- Theoretical computer science, Concatenated error correction code, List decoding, Sequential decoding, Serial concatenated convolutional codes, Low-density parity-check code, Tanner graph, Algorithm, Factor graph, Coding gain, Mathematics
- Abstract
The proposed work is based on the fact that the complete set of bit nodes of an LDPC code may not always be required at the receiving side for successful decoding, and a corresponding strategy is built on this observation. In contrast to common practice, the total number of iterations available is shared among different sets. The first set runs the decoding algorithm with all its bit nodes. Successive sets (in case of decoding failure) each run with a different selection of "erased" bit nodes, leading to an overall non-monotonic behavior. The end result is a system capable of effectively dealing with the problem of cycles and trapping sets without even being aware of their existence. Reported results show an important coding gain over conventional systems.
- Published
- 2011
21. A density-based progressive edge-growth matrix creation technique for LDPC codes
- Author
- K. M. S. Soyjaudah and Pierre Clarel Catherine
- Subjects
- Block code, Theoretical computer science, Metric (mathematics), Bipartite graph, Graph theory, Low-density parity-check code, Error detection and correction, Algorithm, Expander code, Factor graph, Mathematics
- Abstract
In this work, we propose a method specially suited to creating high-rate LDPC codes. The technique adds the connections of the bipartite graph on an edge-by-edge basis. Unlike other progressive edge-growth methods, however, we favor the use of a density metric over the conventional distance metric for the node selection process. As a benchmark, the algorithm yields codes of higher rates than those obtained from bit-filling algorithms. Because of its efficient approach to filling edges on the bipartite graph, the algorithm may also be used to produce codes (of various rates) that are very competitive in terms of error-correcting performance.
- Published
- 2010