
Matthias Noback:
Reusing domain code
Sep 05, 2018 @ 14:38:01

Matthias Noback has continued his series of posts covering software architecture and class structure. In a previous article he talked about when to add an interface to a class; in this latest tutorial he covers the reuse of "domain code", the core logic of your application (as opposed to its supporting structure).

Last week I wrote about when to add an interface to a class. The article finishes with the claim that classes from the application's domain don't usually need an interface. The reason is that domain code isn't going to be swapped out with something else. This code is the result of careful modelling work that's done based on the business domain that you work with. And even if you'd work on, say, two financial software projects in a row, you'll find that the models you produce for each of them will be different in many subtle (if not radical) ways. Paradoxically you'll find that in practice a domain model can sometimes be reused after all. There are some great examples out there. In this article I explain different scenarios of where and how reuse could work.

He then goes on to cover three "reuse" situations, providing a summary for each:

  • Reuse-in-the-small
  • Reuse-in-the-large and software diversity
  • Reuse-in-the-even-larger: reusing entire subdomains

He finishes the article by sharing some thoughts on which of these types of reuse seems most attainable and, in his experience, most useful.
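As a rough illustration of "reuse-in-the-small" - a building block generic enough to travel between projects - consider a small value object. This sketch is mine, not from the article:

    <?php

    // Hypothetical "reuse-in-the-small" candidate: a Money value object that
    // carries no project-specific behaviour, so it could move between domains.
    final class Money
    {
        private $amount;   // in minor units, e.g. cents
        private $currency;

        public function __construct($amount, $currency)
        {
            $this->amount = $amount;
            $this->currency = $currency;
        }

        public function add(Money $other)
        {
            if ($other->currency !== $this->currency) {
                throw new InvalidArgumentException('Cannot add different currencies');
            }

            return new Money($this->amount + $other->amount, $this->currency);
        }
    }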

tagged: domain code reuse tutorial small large larger subdomain

Link: https://matthiasnoback.nl/2018/09/reusing-domain-code/

Matthias Noback:
Combing legacy code string by string
Apr 18, 2018 @ 14:15:59

In a new post to his site Matthias Noback takes a look at legacy applications and two things that most of them seem to have in common: classes that are too large and too generic methods. In this post he discusses these two topics and some of the tactics you can use to help refactor and resolve them.

I find it very curious that legacy (PHP) code often has the following characteristics:
  • Classes with the name of a central domain concept have grown too large.
  • Methods in these classes have become very generic.

He starts by tackling the "classes too large" problem, suggesting that it's usually the result of developers slowly adding to existing functionality rather than introducing large chunks of code all at once. Moving on to the "generic methods" issue, he lays out a common scenario showing how a method evolves over time as it's repurposed for uses other than its original intent. He recommends "taking a step back" and picking apart the code to make the functionality more specific in the places it's used.
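A hypothetical before/after sketch of that "generic method" smell (class and method names are mine, not from the article):

    <?php

    // Hypothetical example: a repository method that grew a flag for every
    // new caller, next to the specific methods that could replace it.
    final class UserRepository
    {
        private $users = [
            ['name' => 'Alice', 'active' => true],
            ['name' => 'Bob',   'active' => false],
        ];

        // Before: one generic method repurposed over time with flags.
        public function getUsers($activeOnly = false)
        {
            if ($activeOnly) {
                return array_values(array_filter($this->users, function ($user) {
                    return $user['active'];
                }));
            }

            return $this->users;
        }

        // After: a specific, intention-revealing method per use case.
        public function allUsers()
        {
            return $this->users;
        }

        public function activeUsers()
        {
            return array_values(array_filter($this->users, function ($user) {
                return $user['active'];
            }));
        }
    }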

tagged: legacy application generic method large class tutorial

Link: https://matthiasnoback.nl/2018/04/combing-legacy-code-string-by-string/

Exakat Blog:
Largest PHP applications (2018)
Mar 19, 2018 @ 16:35:46

On the Exakat blog there's a new post that details the largest PHP applications currently available (and popular), based on their own scanning of open source projects.

When testing the exakat static analysis engine, I need to run it on real code. Open Source projects are a real blessing there, since they come in different shapes and stripes. [...] Nowadays, code bases tend to be smaller, compared to more ancient applications. Components are the norm, and they impact both the development of the application, and its extension.

[...] For this survey, we collected 1885 Open Source applications, and counted only their tokens. Tokens are PHP atomic elements, that are needed to understand and run code. Comments, white spaces and delimiters were not counted, leaving only the useful tokens. The more tokens, then, the larger the application.
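For a rough idea of what that counting looks like, PHP's tokenizer can do it in a few lines (my sketch, not Exakat's actual implementation):

    <?php

    // Count "useful" tokens in a file: tokenize it, then skip comments and
    // whitespace, roughly as the post describes (single-character delimiters
    // are still counted here, so this is only an approximation).
    $tokens = token_get_all(file_get_contents(__FILE__));

    $skip  = [T_COMMENT, T_DOC_COMMENT, T_WHITESPACE];
    $count = 0;

    foreach ($tokens as $token) {
        if (is_array($token) && in_array($token[0], $skip, true)) {
            continue;
        }
        $count++;
    }

    echo "Useful tokens: $count\n";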

The post lists out the top 100 largest PHP applications (ranked by token count, not lines of code), including:

  • Magento2 (#6)
  • Drupal (#12)
  • Yii (#21)
  • Joomla (#36)
  • Symfony (#52)
  • Apigility (#80)

The list comes with the count of tokens and is an update of their 2016 largest PHP applications post.

tagged: large application token size project opensource scanner

Link: https://www.exakat.io/largest-php-applications-2018/

Grok Interactive:
Importing Large CSV Files with PHP Part 1: Import Using One Query
Sep 23, 2015 @ 17:19:33

The Grok Interactive blog has posted a tutorial, the first part in a series, showing you how to work with large CSV files in PHP.

Importing CSV files into your application can become a problem when the file is really big, > 65,000 rows big. Each row of the file needs to be parsed, converted into an object, and then saved to a database. All of this needs to happen within a 30 second timeout window. It may sound like an impossible task, but there are actually a couple of solutions that can solve this problem. While working on a project at Grok, I was tasked with doing exactly that.

He talks about the method he tried initially for parsing the large files: splitting them into chunks and processing each one separately. He points out that this relies on the filesystem, though, which made it difficult to debug. He finally came up with a different, simpler solution: importing the files directly into MySQL via a LOAD DATA LOCAL INFILE command. He shows how to set this up in a controller and "importer" class that handles the upload and import via the importFileContents method (complete code included). He walks through what the code is doing and includes a few notes about configuring the database connection with additional PDO options to allow the local file load.
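A minimal sketch of that approach might look like the following (table name, file path and credentials are placeholders, not the article's actual code):

    <?php

    // Import a large CSV directly into MySQL, bypassing row-by-row parsing.
    $pdo = new PDO(
        'mysql:host=localhost;dbname=app',
        'user',
        'secret',
        [PDO::MYSQL_ATTR_LOCAL_INFILE => true] // required for LOCAL INFILE
    );

    $sql = "LOAD DATA LOCAL INFILE " . $pdo->quote('/path/to/large.csv') . "
            INTO TABLE imports
            FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
            LINES TERMINATED BY '\\n'
            IGNORE 1 LINES";

    $rows = $pdo->exec($sql); // number of rows imported
    echo "Imported $rows rows\n";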

tagged: tutorial csv file import large processing chunk mysql load file query

Link: http://www.grok-interactive.com/blog/import-large-csv-into-mysql-with-php-part-1/

ThePHP.cc:
PHPUnit 4.7 and Three Shades of Green
Jun 08, 2015 @ 17:57:25

Sebastian Bergmann has posted a guide to PHPUnit 4.7 and some of the changes/new features it introduces.

PHPUnit 4.7 introduces a couple of small improvements. For instance, PHPUnit's PHPT test runner now supports --INI-- sections, information about the PHP runtime used is now printed in verbose mode, and a warning is now printed when code coverage data is collected but no whitelist is configured.
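A minimal PHPT test using such an --INI-- section might look like this (a hypothetical example, not one from the post):

    --TEST--
    ini_get() reflects the --INI-- section
    --INI--
    precision=4
    --FILE--
    <?php
    echo ini_get('precision');
    --EXPECT--
    4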

He also talks about the improved HTML code coverage reports, which now color lines in different shades of green depending on the size (small, medium, or large) of the tests covering them. He also briefly looks ahead to PHPUnit 5, the PHP versions it will support, and the plans for its release.

tagged: phpunit unittest v47 small medium large coverage shade phpunit5

Link: https://thephp.cc/news/2015/06/phpunit-4-7-and-three-shades-of-green

SitePoint PHP Blog:
How to Encrypt Large Messages with Asymmetric Keys and phpseclib
Jan 20, 2015 @ 17:40:51

On the SitePoint PHP blog today David Brumbaugh shows you how to encrypt large messages with phpseclib and asymmetric keys. phpseclib is a PHP library specifically designed to handle encryption and decryption in an easy-to-use way.

Most of us understand the need to encrypt sensitive data before transmitting it. Encryption is the process of translating plaintext (i.e. normal data) into ciphertext (i.e. secret data). During encryption, plaintext information is translated to ciphertext using a key and an algorithm. To read the data, the ciphertext must be decrypted (i.e. translated back to plaintext) using a key and an algorithm. [...] A core problem to be solved with any encryption algorithm is key distribution. How do you transmit keys to those who need them in order to establish secure communication? The solution to the problem depends on the nature of the keys and algorithms.

He talks some about the difference between symmetric and asymmetric algorithms and offers advice on selecting the right one (or ones) for your app. He also talks briefly about the main limitation of RSA keys: there's a cap on the amount of data they can encrypt. His solution is to "encrypt the message with a symmetric key, then asymmetrically encrypt the key and attach it to the message". He explains the encryption/decryption process step by step and then shows the code to make phpseclib do the work - how to generate the keys and how to build the encrypt and decrypt functions in about 30 lines of code each.
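A sketch of that hybrid approach, using phpseclib's namespaced 2.x classes rather than the older Crypt_RSA/Crypt_AES names the 2015 article would have used (structure and names are mine, not Brumbaugh's code):

    <?php

    use phpseclib\Crypt\AES;
    use phpseclib\Crypt\RSA;
    use phpseclib\Crypt\Random;

    // Encrypt the message with a one-off AES key, then RSA-encrypt only the key.
    function encryptLargeMessage($plaintext, $publicKey)
    {
        $aesKey = Random::string(32); // 256-bit symmetric key
        $iv     = Random::string(16);

        $aes = new AES(); // CBC mode by default
        $aes->setKey($aesKey);
        $aes->setIV($iv);

        $rsa = new RSA();
        $rsa->loadKey($publicKey);

        return [
            'key'     => base64_encode($rsa->encrypt($aesKey)),
            'iv'      => base64_encode($iv),
            'message' => base64_encode($aes->encrypt($plaintext)),
        ];
    }

    // Reverse the process: recover the AES key with RSA, then decrypt the message.
    function decryptLargeMessage(array $envelope, $privateKey)
    {
        $rsa = new RSA();
        $rsa->loadKey($privateKey);
        $aesKey = $rsa->decrypt(base64_decode($envelope['key']));

        $aes = new AES();
        $aes->setKey($aesKey);
        $aes->setIV(base64_decode($envelope['iv']));

        return $aes->decrypt(base64_decode($envelope['message']));
    }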

tagged: encrypt decrypt large message asymmetric key phpseclib tutorial

Link: http://www.sitepoint.com/encrypt-large-messages-asymmetric-keys-phpseclib/

Christian Weiske:
PHP 5.6: Large file upload support
Dec 11, 2013 @ 17:09:47

Christian Weiske has posted information about a feature in the upcoming PHP 5.6 version of the language: large file upload support. This new feature allows files larger than 2GiB to be uploaded correctly.

PHP version 5.6 brings support for file uploads larger than 2GiB. We can say "thank you" to Ralf Lang for the initial patch that fixes bug #44522 , which was open since 2008. During testing uploads of files with a size of 4 - 11GiB on my PHP-CGI setup, I noticed that files above 4GiB did not get uploaded correctly. Michael Wallner was quick to fix that bug, and now 5.6 has fully working support for big files.

PHP 5.6 is still in development and some other new features are slated to be added to it. You can find some of them listed in the RFC section of the PHP wiki.
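On the receiving side nothing changes; a plain upload handler like the one below simply starts working for multi-gigabyte files, provided upload_max_filesize and post_max_size allow them (field name and target path here are illustrative):

    <?php

    // Ordinary upload handling; with PHP 5.6 the temporary file behind
    // $_FILES may now exceed 2GiB.
    if (isset($_FILES['upload']) && $_FILES['upload']['error'] === UPLOAD_ERR_OK) {
        move_uploaded_file(
            $_FILES['upload']['tmp_name'],
            '/var/uploads/' . basename($_FILES['upload']['name'])
        );
        echo 'Uploaded ', $_FILES['upload']['size'], ' bytes';
    } else {
        echo 'Upload failed';
    }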

tagged: php56 large file upload bug patch

Link: http://cweiske.de/tagebuch/php-large-file-uploads.htm

Ilia Alshanetsky's Blog:
Performance Analysis of isset() vs array_key_exists()
Mar 06, 2012 @ 14:44:31

Ilia Alshanetsky has posted about a performance difference he's found between using PHP's isset and array_key_exists functions to check whether an array key exists.

At Confoo I had an interesting conversation with Guilherme Blanco regarding the fact that in Doctrine 2 they had a performance issue due to usage of array_key_exists() and how it was significantly slower than isset(). His anecdotal example was that doing isset() took 0.5 seconds, while array_key_exists() for the same operation took 5 seconds! That seemed wrong [...] so, I've decided to do a quick benchmark using a 5,000 element array.

His benchmarking code is included - it loads a simple data set from a file of "words" and measures the microtime of the isset and array_key_exists calls. His results do show that isset is the faster of the two (by about 2.5x), but it's still a micro-optimization that won't gain you much in the end.

The bottom line is that if your application does not need to distinguish between an array key that does not exist and one whose value happens to be NULL you should use isset() because it happens to be a little faster.
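A stripped-down version of that kind of benchmark (illustrative only; timings will vary by machine and PHP version):

    <?php

    // Build a 5,000 element array, then time repeated lookups with both functions.
    $data = [];
    for ($i = 0; $i < 5000; $i++) {
        $data['key' . $i] = $i;
    }

    $start = microtime(true);
    for ($n = 0; $n < 100000; $n++) {
        isset($data['key2500']);
    }
    printf("isset():            %.4fs\n", microtime(true) - $start);

    $start = microtime(true);
    for ($n = 0; $n < 100000; $n++) {
        array_key_exists('key2500', $data);
    }
    printf("array_key_exists(): %.4fs\n", microtime(true) - $start);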

tagged: isset arraykeyexists benchmark large array code

Link:

Kevin Schroeder's Blog:
fatal: The remote end hung up unexpectedly
Nov 04, 2011 @ 17:55:28

Kevin Schroeder has a quick tip for anyone using phpcloud.com and having trouble with git and "remote end hung up" error messages.

If you are using phpcloud.com and are experiencing errors with git [...] and you are trying to push large files (not sure what is defined as "large") you may need to change some git settings.

He points out two settings - one for Windows and the other for Linux - that increase the buffer size to handle larger files that might be included in your repository.

tagged: phpcloud git problem large file buffer size

Link:

Martin Sikora's Blog:
Storing arrays using JSON, serialize and var_export
Aug 09, 2011 @ 14:31:51

Martin Sikora was working on an application that used a large dataset (stored in an array) and found some interesting results regarding PHP's load and save times across four different types of arrays.

Recently I was dealing with processing and storing large arrays in PHP (around 100,000 items) and I found out some quite surprising facts that are very useful in performance critical applications. [...] When I started looking for some benchmarks I found the article Cache a large array: JSON, serialize or var_export?. That is really good but I wanted to compare a few more things.

He tested four different array types, including associative arrays with integer values and numerically indexed arrays with string values, at sizes of 10, 100, 1,000 and 10,000 items. He ran his tests using the JSON functions, serialize() and var_export(). There are graphs of his results for each included in the post, with some interesting variations between the different array types.
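The shape of such a benchmark, reduced to its essentials (my sketch, not Sikora's code; file paths are placeholders):

    <?php

    // Compare save/load round-trips for the three storage formats.
    $data = [];
    for ($i = 0; $i < 100000; $i++) {
        $data['key' . $i] = 'value' . $i;
    }

    $start = microtime(true);
    file_put_contents('/tmp/data.json', json_encode($data));
    $restored = json_decode(file_get_contents('/tmp/data.json'), true);
    printf("json:       %.4fs\n", microtime(true) - $start);

    $start = microtime(true);
    file_put_contents('/tmp/data.ser', serialize($data));
    $restored = unserialize(file_get_contents('/tmp/data.ser'));
    printf("serialize:  %.4fs\n", microtime(true) - $start);

    $start = microtime(true);
    file_put_contents('/tmp/data.php', '<?php return ' . var_export($data, true) . ';');
    $restored = include '/tmp/data.php';
    printf("var_export: %.4fs\n", microtime(true) - $start);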

tagged: json serialize varexport large array dataset benchmark

Link:

