I, too, am somewhat puzzled by the last paragraph that you wrote.
Please look at this recent thread to see how to compute a definite integral, or print a list of prime numbers less than 1 million, using very few lines of Fortran code. In fact, the only “loop” that one needs in such code may be just an implied-do-list.
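For instance, here is a minimal sketch along those lines (the integrand, interval, and the reduced prime bound are my own illustrative choices, not the exact code from that thread):

program implied_do_demo
   implicit none
   integer :: i, k
   integer, parameter :: n = 100000
   real(kind(1d0)), parameter :: h = 1d0 / n
   ! midpoint rule for the definite integral of exp(x) on [0,1];
   ! the implied-do-list stands in for an explicit DO loop
   print *, "integral =", h * sum([( exp((i - 0.5d0)*h), i = 1, n )])
   ! primes below 100 (a small bound for brevity; 1 million works the
   ! same way, only slower), again with nothing but implied-do-lists
   print "(*(i0,1x))", pack([(i, i = 2, 99)], &
      [( all(mod(i, [(k, k = 2, int(sqrt(real(i))))]) /= 0), i = 2, 99 )])
end program implied_do_demo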
No, it is not. Sure, there is work done in fixed-form source, but that is not FORTRAN 77. (Here I am conceding the point that Fortran 77 is the same as FORTRAN 77, though technically there is no such thing as the former.)
The linked project SMART only serves to further bolster the argument to make implicit none the default!
Mega kudos to the authors of SMART (and LP-VICode) for eschewing the use of IMPLICIT statements in their code; they do NOT even use IMPLICIT NONE! Way to go!
They explicitly declare all the variables. These authors are not in the “they” camp alluded to by @mecej4 upthread.
They explicitly state in their documentation and elsewhere that their coding design calls for “All real variables are DOUBLE PRECISION”. The Fortran standard does not support this generally with its implicit mapping; sure, there are nonstandard compiler options that can enable this, but that is beside the point.
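A small sketch of why (my own illustration, not code from SMART): the implicit mapping only covers undeclared names, and default-real literals escape it entirely.

implicit double precision (a-h, o-z)
! x is double precision via the mapping, but the literal 0.1 is a
! default (single-precision) real, so x receives a single-precision
! approximation of 0.1
x = 0.1
print *, x == 0.1d0   ! prints F
end

Nonstandard flags such as gfortran’s -fdefault-real-8 promote the literals as well, which is precisely the processor-specific crutch alluded to above.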
They state the code is written with “the common nonstandard extensions DO-ENDDO, INCLUDE, DOWHILE, lowercase characters, inline comments, names longer than 6 characters, and names containing the nonstandard character “_””, meaning they have already moved past ANSI FORTRAN 77, which was highly limiting for writing any code even when it was published in 1978; any and all authors immediately required extensions to write any meaningful programs.
Their goal is to refactor their code away from fixed-form source: “There is also a projected version written in Fortran90 for parallel computing, though in an early developing stage.”
Bottom line: SMART and the related family of programs (LP-VICode, MILKYWAYHYDRA) really could do with implicit none as the default in the Fortran standard; their code is a classic example of such a need.
Yes, that’s certainly part of the answer: IMHO forall can pack a lot of stuff into a one-liner, which is why I use it all the time, but it’s still a loop-like construct.
Some examples of what I think represent the “unfinished” nature of array-based syntax are:
array-based indexing is limited to 1D. If Fortran’s strength is arrays, it would seem natural that some form of multi-dimensional indexing would be possible. Something is done with select rank, but its usage seems very limited (a minimal sketch follows the pack example below);
same thing for derived type indexing: when conditions are simple, it would be very useful to access chunks of derived types with the array functions, like
type a
   integer :: i(10)
end type a

type(a) :: array(50)
integer, allocatable :: myPackedData(:)
! desired usage, not currently conformant: array(:)%i has two
! nonzero-rank part-refs, which the standard disallows (and PACK
! returns a rank-1 result, hence the rank-1 target)
myPackedData = pack(array(:)%i, some_condition_here)
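On the select rank point above, here is a minimal sketch of what it offers today, assuming an assumed-rank dummy: every rank still needs its own hand-written branch, which is why it feels so limited.

subroutine show_sum(x)
   implicit none
   real, intent(in) :: x(..)   ! assumed-rank dummy (Fortran 2018)
   select rank (x)
   rank (1)
      print *, "rank-1 sum:", sum(x)
   rank (2)
      print *, "rank-2 sum:", sum(x)
   rank default
      print *, "rank", rank(x), "not handled"
   end select
end subroutine show_sum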
It should be possible to define an accessor function so that derived types could be treated as arrays, imagine something like this:
type :: symmetric_matrix(n)
   integer, len :: n
   real :: aij((n**2+n)/2)
contains
   ! hypothetical accessor syntax, not part of any current standard
   accessor, dimension(:,:), pass(i,j) :: symmetric_get
end type symmetric_matrix
contains

   elemental real function symmetric_get(this, i, j) result(aij)
      class(symmetric_matrix(*)), intent(in) :: this
      integer, intent(in) :: i, j
      integer :: ptr
      ! compute the linear index into the packed storage
      ! (symmetric_ptr is a helper left unspecified here)
      ptr = symmetric_ptr(this%n, i, j)
      aij = this%aij(ptr)
   end function symmetric_get
[...] in the program
type(symmetric_matrix(10)) :: mat
real :: x
x = mat(2,3)   ! use the array-like accessor (hypothetical)
Anyway, these are just a few ideas. I’m not pretending they are easily doable or that I have considered all the edge cases, but to my knowledge there hasn’t been much interest in this type of user-centric feature that, with relatively little effort IMHO, would bring Fortran fantastically closer to the most famous dynamic languages, but with Fortran performance.
Rarely if ever does one arrive at a stage where the smallest of changes yields so much benefit as it does with eliminating implicit mapping in the standard. Imagine that, one sentence is all it takes!
I can’t help but word-vomit ad nauseam the following: if Fortranners start the process now, it will take close to a decade from now to make it “official” from a standard point of view. It will take ages for processors to follow, if at all. Moreover, most of the compilers will continue as they are now. There is really little to no risk for the mysterious, anonymous “they” of @mecej4 fame, and “they” have plenty of time if “they” at all wish to adopt type safety in their codes.
It will be really good for Fortran if every Fortranner pondered this further and decided not to remain “silent” but to lend support to moving Fortran forward by upvoting and writing a sentence to request that implicit none be made the default in the Fortran standard.
Under the proposed change, the following code, standard-conforming per the current standard, itself becomes nonconformant:
DATA I/42/
END
and the rest of the code is irrelevant.
With the proposed change, code such as the above will instead need to be written as
SUBROUTINE FOO
   IMPLICIT INTEGER(I-N)
   DATA I/42/
   INTEGER I
   PRINT *, I
END

CALL FOO
END
to behave the same as under the current standard. The point then remains: instead of forcing everyone else to introduce implicit none, “they” of @mecej4 “fame” need only introduce an IMPLICIT statement, or stick with a processor conformant to a prior standard revision.
And the section 8.6.7 verbiage on the DATA statement needs no change, since the proposed one-sentence change only impacts implicit mapping, not implicit typing, which remains. This is a crucial aspect to keep in mind.
Just for confirmation, the following code is currently invalid because the symbol a is given the type real by the default implicit rule, while a is declared as integer on the next line, which violates the rule in Section 8.6.7 mentioned above and quoted below (correct?)
8.6.7 DATA statement
A variable that appears in a DATA statement and has not been typed previously
shall not appear in a subsequent type declaration unless that declaration confirms
the implicit typing.
program main
   data a /100/
   integer a
   print *, a
end
Compiling this code with gfortran-10 gives
$ gfortran-10 test.f90
test.f90:4:13:
4 | integer a
| 1
Error: Symbol 'a' at (1) already has basic type of REAL
which seems compatible with the above document. On the other hand, with the proposed change to implicit none as the default, the above code would now become standard-conforming (correct?). To test this, I’ve tried the code with the -fimplicit-none option, and it gives the expected result (the code compiles and runs).
Then I wonder: what would be the new meaning of “has not been typed” in the above passage? If it refers to implicit none, then what is the meaning of “declaration confirms the implicit typing” (when there is no such implicit typing)?
So, it also seems to me that the above sentence does not make sense if the default is changed to implicit none (which suggests that there would also be other sentences that need careful examination)…
(But I cannot read “standardese” very well, so my interpretation may be wrong.)
Re: “Break standard conforming code is a good thing” - note Fortran 202X breaks conforming code and that was OK with the committee; why is that?
There are at least two apps in production use with a team I have worked with in industry that have conformant, functioning Fortran library code which will break on account of the change brought about by the standards committee with Fortran 202X. The following trivial reproducer gets at the underlying issue:
integer, parameter :: LENS = 10
character(len=:), allocatable :: s
s = repeat( ' ', ncopies=LENS )   ! allocate s to the desired length
! per Fortran 202X, this internal WRITE reallocates s to the length
! of the record actually written
write( s, "(g0)" ) "x"
print *, "len(s) = ", len(s)
print *, len(s) == LENS
end
The behavior of the program per current standard is
len(s) = 10
T
With a Fortran 202X conformant compiler, the behavior will change to
len(s) = 1
F
The above apps rely on the string length not changing due to subsequent internal I/O once the strings are allocated to a certain desired length in another part (initialization) of the library.
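For what it is worth, a possible mitigation sketch (my own suggestion, not how the apps are written): route the internal WRITE through a fixed-length buffer so the allocatable length is never touched by I/O.

character(len=LENS) :: buf
write( buf, "(g0)" ) "x"   ! buf has fixed length; no reallocation occurs
s = buf                    ! s ends up with length LENS, as before

But that means touching every such WRITE site in the library, which is rather the point of the complaint.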
I brought my concerns about this change and its implications to the committee members several times, even during the committee meeting, but to no avail. Perhaps it is because the above two apps are not of interest to the holy “they” of @mecej4 fame, so breaking these apps is entirely OK? This is why I keep asking, “For whom Fortran, for what?”
Re: “Your response was expected.” - a needless and useless comment, veering again toward ad hominem.
I do NOT espouse breaking changes generally. However, implicit mapping and implied save are two aspects where the benefits to Fortran from doing away with them far exceed the status quo. With implicit mapping, it will be over 50 years of recognition - imagine that, 50 years! - that its use can lead to unsafe code. Few languages out there hold on to a bad feature like that.
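For readers unfamiliar with the implied save trap, a minimal sketch (illustrative code of my own):

subroutine tally()
   implicit none
   ! initialization in a declaration implies the SAVE attribute:
   ! count is set to 0 once, before the first call, not on every call
   integer :: count = 0
   count = count + 1
   print *, "call number", count
end subroutine tally

Call tally() three times and it prints 1, 2, 3; many newcomers expect 1, 1, 1.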
Thanks very much for the explanation. To confirm one more point: does the phrase “has not been typed previously” mean that there is neither an explicit type declaration nor an IMPLICIT statement (that determines the type of a) above the DATA statement? If so, my code above is invalid in the proposed new standard (if no compiler extension is available), while valid in the current standard? If it is invalid in the new standard, I guess the sentence in the above document (Sec. 8.6.7) should be modified as
A variable that appears in a DATA statement and has not been typed previously
shall cause an error.
or similar, because the default is now implicit none and no type information is available for a when the processor meets this DATA statement. Does this understanding seem correct…? (Or does the semantics of the DATA statement itself need reconsideration…?)
I also would like to have this capability… (or more generally, a way to define new custom types that can be used in a way similar to arrays or functions). I guess many other languages can do it. In the case of D (statically typed), I think this is easily done like
struct Foo(T, int n)
{
    T[n] data;
    void init() { foreach (i; 0 .. n) data[i] = i; }

    // foo(i, j): call syntax, assignable because it returns by ref
    ref T opCall( int i, int j )
    {
        return data[ (i <= j) ? i : j ]; // just a trivial example
    }

    // foo[i, j]: index syntax, same trivial mapping
    ref T opIndex( int i, int j )
    {
        return data[ (i <= j) ? i : j ];
    }
}

void main()
{
    Foo!(double,5) foo;
    foo.init();

    import std.stdio;
    writeln( "foo (before) = ", foo );

    foo( 2, 3 ) = 100;  // assign through opCall
    foo[ 4, 1 ] = 200;  // assign through opIndex
    writeln( "foo (after) = ", foo );
}
$ ldc2 test.d && ./test
foo (before) = Foo!(double, 5)([0, 1, 2, 3, 4])
foo (after) = Foo!(double, 5)([0, 200, 100, 3, 4])
and similarly for other languages (like C++?). The above Foo looks very similar to a parameterized derived type in Fortran, plus a future extension for generics. More elaborate arrays can also be defined, even with custom slice operators…
(Operator Overloading - D Programming Language)
I guess they are useful if both built-in arrays and custom arrays can be passed to the same routine via generics (with some checks on the interface).
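For comparison, the closest current-Fortran equivalent I can think of is an ordinary type-bound getter/setter pair, since ( ) and [ ] cannot be overloaded on derived types today (foo_t, foo_get, and foo_set are illustrative names of mine):

module foo_m
   implicit none
   type :: foo_t
      double precision :: data(5) = [0d0, 1d0, 2d0, 3d0, 4d0]
   contains
      procedure :: get => foo_get
      procedure :: set => foo_set
   end type foo_t
contains
   double precision function foo_get(this, i, j) result(val)
      class(foo_t), intent(in) :: this
      integer, intent(in) :: i, j
      val = this%data(min(i, j))   ! mirrors the trivial D example
   end function foo_get
   subroutine foo_set(this, i, j, val)
      class(foo_t), intent(inout) :: this
      integer, intent(in) :: i, j
      double precision, intent(in) :: val
      this%data(min(i, j)) = val
   end subroutine foo_set
end module foo_m

So foo(2,3) = 100 becomes call foo%set(2, 3, 100d0) - workable, but nowhere near as pleasant.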
But considering my age, I am afraid I will not have much time to enjoy such capability… (so maybe for young people).
Spar theory was developed in 1915
Berry, A. (1919). “Calculation of Stresses in Aeroplane Wing Spars.” Transactions of the Royal Aeronautical Society 1: 3-33.
The following method of calculating the stresses in the spars of an aeroplane wing is essentially a simplification in form of the method given in the paper “Some Contributions to the Theory of Engineering Structures, with special reference to the Problem of the Aeroplane,” by Messrs. H. Booth and H. Bolas, issued by the Air Department of the Admiralty in April, 1915.
There are about five books since then that include significant reference to spar theory, including one by Livesley in the mid-1950s. Harrison, in his 1973 book, includes the 2D version of the theory and some Fortran code. A paper published in about 1989 by three Canadian academics extends the theory to 3D. This was a significant feat. No code, just results, and they published nothing more on it. My best guess is it was someone’s PhD thesis, but I could not trace the people.
I used the results to check the coding of the 3D program I extended from the 2D version, and I added in plates from SM by Felippa. SM is as good as it gets for plates.
The need for spar theory comes from the observation that it is challenging in the extreme to match the measured frequencies from real bridges and beams to the standard beam-theory model: there are not enough variables in the model. We measure frequencies to 1000 Hz, but really 500 Hz is the limit.
We have measured several hundred structures now, so we have a reasonable idea of the response. A secondary problem is that a lot of structures have temperature-dependent frequencies, varying over the normal temperature range of a day.
If you are going to measure bridges, the only important load for long-term monitoring is thermal.
Thank you for the resource. The Harrison textbook from 1989 is indeed good, but I have no access to the 1973 book.
Please correct me if I am wrong, but it seems this theory is really nothing other than a theory of frameworks with Bernoulli beam elements, to which the second-order effect of coupling of bending and axial force has been added. As such, it really has nothing to do with Timoshenko versus Bernoulli; the framework with axial-bending coupling could just as well use Timoshenko. By the way, at this point this is standard material in many textbooks, for instance Popov (1990).
I think you mean Carlos Felippa, but I don’t know what SM stands for.
I don’t know what this refers to: does it pertain to the above-mentioned coupling?
SM is the name of the Felippa program, which I borrowed with his kind permission. It is referenced in several papers that he wrote. The code was originally published in Fortran but was converted to MATLAB and other languages. I am using a Fortran version.
I have not read Popov, sorry. The first 3D version of the axial-compression theory following on from Harrison that I saw was the 1989 one from the three Canadian academics. There is a version from Germany, used in one of the major analysis packages from Europe, but it is not a complete set of equations. If someone created it before the Canadians, I did not see it.
If you have a set of frequencies, say standard ones for a concrete beam - 15, 19, 25 (this last one will be temperature dependent) and say 70 Hz - it is very hard with standard Timoshenko theory to match the frequency set. You also have the problem that the structure is slowly degrading at a measurable rate. We now measure the rate.
I trust this answers your questions.
The 1973 book contains code not in the 1989 book; I have looked at both. Last time I checked, you can buy the 1973 book online. It should also be available as an interlibrary loan from TAMU.